ChatGPT is getting harder to rein in: It writes false articles, then argues with users and insists it is right!

Tram Ho

"Guns are harmless to children"

Michelle A. Williams, dean of the Harvard T.H. Chan School of Public Health, received a complaint from a colleague about an article generated by ChatGPT.

The tool had written an essay arguing that access to guns does not increase the risk of children dying.

ChatGPT's fluent prose cited academic papers from leading researchers, including a global expert on gun violence.

According to Williams, however, the studies cited in the footnotes do not actually exist.


ChatGPT had used the names of real firearms researchers and real academic journals to fabricate a set of fictional studies supporting the entirely false claim that guns are not dangerous to children.

Even when challenged, ChatGPT still tried to justify its mistake.

The chatbot responded: "I can assure you that the references I provided are reputable and come from peer-reviewed scientific journals."

But that was not true. The exchange sent a chill through a scholar like Williams.

The public's excitement over ChatGPT is understandable: it is a text generator that produces natural-sounding prose from patterns "learned" across billions of sentences online. But this powerful technology carries very real risks and can harm public safety.

Both OpenAI, the company that created ChatGPT, and Microsoft, which is building the technology into its Bing search engine, know that chatbots can generate plausible-sounding falsehoods. They can also be manipulated into producing highly persuasive misinformation.

The people behind ChatGPT believe the tool needs to be tested with real users; in their view, large-scale testing is crucial to improving the product.

Unfortunately, this strategy ignores the real-world consequences now that ChatGPT has surpassed 100 million monthly users. The companies will gather a wealth of feedback data, but in the meantime they risk unleashing a new wave of fake news that spreads panic and deepens distrust in a society where trust is already low.


18 errors in a medical article written by ChatGPT

For example, the consumer magazine Men's Journal recently published an article about low testosterone "written by ChatGPT". Another publication asked an endocrinologist to review the piece, and he found 18 errors in it.

Readers who rely on such an article to make health decisions could be badly misled. That is a serious concern, because some 80 million adults in the US have limited health literacy, and young people may not think to verify AI-generated "truths".

The second threat is that ChatGPT could be "weaponized" by bad actors. We live in an era defined by widespread access to information and low levels of trust.

In a world where anyone with a blue-checkmark Twitter account can become a news outlet, ChatGPT's impressive ability to produce content could let malicious actors spread false stories quickly and cheaply.

Such actors could also mount large-scale attacks that teach AI programs to lie, spreading falsehoods even further.

According to Williams, AI development is the way of the future. Done well, artificial intelligence can help reduce human error and foster innovative solutions in medicine, science and countless other fields.

But as we explore this new technology, we must understand both its benefits and its risks, and put safeguards in place to protect public safety.

Regulators should take the lead in this effort, but US agencies do not have a strong track record of keeping pace with innovations such as cryptocurrency or social media.

Launching a technology before it is ready, at a critical moment, not only risks harming public safety but also hurts the company behind it. For consumers and companies alike, it is far better to get a technological breakthrough right slowly than to rush it with devastating consequences.


Source: Genk