Mark Zuckerberg’s misconception: Believing language technology would change the world, a virtual assistant ‘died prematurely’ after 3 days for ‘presenting falsehoods as fact’

Tram Ho

On November 15, Meta unveiled a new language-processing model called Galactica, created to aid in searching and processing scientific documents, according to MIT Technology Review. However, instead of becoming the big hit Meta expected, Galactica “died prematurely” after only 3 days of intense criticism. Yesterday, the public demo of the model was officially taken down.

Meta’s fallacy, and its hubris, once again expose Big Tech’s blind spot around large language models. Many studies have shown the flaws of this technology, including a tendency to reproduce stereotypes and to present falsehoods as fact. Meta and the many other companies pursuing large language models, including Google, have not taken this issue seriously.

Galactica is a large language model for science, trained on 48 million scientific documents, including articles, websites, textbooks, lecture notes, and encyclopedias. Meta advertised the model as a “shortcut” for researchers and students, with the ability to “summarize academic articles, solve math problems, write Wiki articles, generate scientific code, annotate molecules…”.

However, the glossy coating quickly wore off. Like all other language models, Galactica is just a bot that cannot distinguish fact from fiction. Within hours, scientists were sharing its misleading results and inaccurate information on social networks.

“I have mixed feelings about this new endeavor. In the demo rollout, they look amazing, magical, and smart. However, people don’t seem to fully understand the principle, that such things can’t work the way we exaggerate,” said Chirag Shah, who studies search technology at the University of Washington.


Meta’s fallacy, and its hubris, once again show Big Tech’s blind spot in the field of big language modeling.

When asked about the reason for deleting the demo, the Meta representative said: “Thank you to everyone who tried the Galactica model demo. We appreciate the feedback from the community and have paused for now.”

One big problem with Galactica is that it can’t distinguish truth from falsehood – the bare minimum for a language model designed for science. According to Michael Black, director of the Max Planck Institute for Intelligent Systems in Germany: “In any case, the misinformation sounds true and well-founded. I think this is very dangerous.”

“Don’t trust it too much. Basically, just think of it as an advanced Google search over rudimentary secondary sources!” said Miles Cranmer, an astrophysicist at Princeton.

According to the MIT Technology Review, Galactica also has gaps in information processing. When asked to generate text on certain topics, such as “racism” and “AIDS,” the model responded: “Sorry, your query did not pass our content filter. Try again and remember this is a scientific language model.”

This is taken as a tacit admission of what language models cannot, and perhaps never will, be able to do.

“Language models don’t really understand anything; all they do is capture patterns of word sequences and output them probabilistically. That gives a false sense of intelligence,” said Shah.
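Shah’s point can be illustrated with a deliberately tiny sketch (not Galactica’s actual architecture, which is a large transformer): a bigram model that only counts which word follows which and samples the next word by observed frequency. Every sentence it emits is built from locally plausible word transitions, yet nothing constrains the whole to be true.

```python
import random
from collections import defaultdict

# Toy training "corpus": two true sentences about orbits.
corpus = "the moon orbits the earth . the earth orbits the sun .".split()

# Count, for each word, the words observed to follow it.
followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)

def generate(start, length):
    """Sample a word sequence by repeatedly picking an observed follower."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        # random.choice over the raw list samples by observed frequency.
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

Because “the” is followed by “moon”, “earth”, and “sun” in the training data, the sampler can recombine them into fluent but false outputs such as “the moon orbits the sun” – each two-word step is statistically grounded, the overall claim is not. That, in miniature, is the failure mode critics saw in Galactica.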

In response, the Meta team behind Galactica argues that this language model is superior to search engines: “We believe this will be the next interface for how humans approach scientific knowledge.”

Gary Marcus, a cognitive scientist at New York University, made his point in a post titled “A Few Words About Bullshit,” arguing that the ability of large language models to mimic human-written text amounts to little more than convincing nonsense.


Meta released disappointing third-quarter financial results for 2022, and said it was making “significant changes” to cut spending by 2023.

Reportedly, Meta is not the only company championing the idea that language models can replace search engines. For years, Google has also promoted its PaLM language model as a way to look up information. The idea has promise, but asserting the credibility of the information, as Meta did when promoting Galactica, is reckless and irresponsible.

Back in 2016, Microsoft launched a chatbot called Tay on Twitter, then took it down after only 16 hours once Twitter users had turned it into a racist and homophobic sexbot.

“The big tech companies keep doing this, and mark my words, they won’t stop, because they think they can. They think this will be the future of access to information, even if no one asked for it,” Shah said.

Not long ago, Meta released disappointing financial results for the third quarter of 2022 after recording a decline in revenue, and warned that it was making “significant changes” to reduce spending before 2023. From July to September, the group’s revenue was only $27.7 billion, down 4% from the same period last year. Facebook’s parent company had previously reported its first-ever revenue decline in the second quarter.

Meanwhile, Meta’s net profit was also unimpressive at only $4.4 billion, down more than half from last year. However, Facebook representatives remain optimistic that revenue in the last quarter of 2022 will rebound, reaching $30 billion to $32.5 billion.

“We are moving into 2023 with a focus on prioritizing efficiency. It will help us adjust to the current direction and build a stronger company,” said Mark Zuckerberg, Meta CEO. “While we face short-term revenue challenges, fundamentals can still significantly improve revenue growth.”

By: Bloomberg, MIT Technology Review


Source: Genk