Scary warning regarding artificial intelligence

Tram Ho

The above conclusion was drawn by experts after the Washington Post published an article reporting that ChatGPT had fabricated a story accusing law professor Jonathan Turley of sexually harassing students.

The article describes how law professor Eugene Volokh of the University of California asked ChatGPT whether sexual harassment by lecturers is a problem at US law schools, and requested at least five examples with citations from relevant articles.

In response, ChatGPT gave five examples complete with factual details and cited sources to back them up. However, when Professor Volokh checked, three of the responses were false, citing nonexistent articles from the Washington Post, Miami Herald, and Los Angeles Times.

Notably, OpenAI's chatbot also claimed that law professor Jonathan Turley had once made sexually explicit comments and attempted to touch a female student during a trip to Alaska, citing a Washington Post article from March 2018.

In fact, the Washington Post has never published such an article, no such trip to Alaska ever took place, and Professor Turley has never been accused of harassing students.


ChatGPT has caused a sensation in recent times, but it also worries many experts. Photo: WST Post

The Washington Post then tried entering Professor Volokh's exact question into ChatGPT and Bing again.

As a result, the free version of ChatGPT refused to answer, citing a "violation of the AI content policy, which prohibits the distribution of offensive or harmful information".

Bing, which uses the GPT-4 model, still gave false information about Professor Turley similar to ChatGPT's.

"This shows that misinformation can spread from one AI to another," the Washington Post's expert emphasized.

ChatGPT has taken the world by storm since its launch late last year, and a series of rival AIs has since sprung up in quick succession.

However, there are still no specific regulations governing AIs like ChatGPT, Bing, or Bard, even though these systems have been trained on huge volumes of data collected from the internet.

The popularity of AI has raised a series of concerns about the risk of spreading fake news, as well as questions about who is responsible when a chatbot gives a wrong answer.

"AI answers so confidently that people believe it can do everything; it is difficult to distinguish between fact and misinformation," admitted Kate Crawford, senior researcher at Microsoft Research.

OpenAI spokesman Niko Felix also admitted: "We've always tried to be transparent, and we have to admit that ChatGPT doesn't always give the right answer. Improving accuracy is one of our key priorities, and we are taking concrete steps forward."


Source: Genk