OpenAI CEO feels “bad” because of ChatGPT’s security bug
- Tram Ho
According to Bloomberg, a recent technical error exposed users’ chat titles on ChatGPT. One Reddit user shared that their ChatGPT account displayed a list of chat titles from conversations they had never had.
OpenAI confirmed to Bloomberg that some ChatGPT users saw other people’s chat titles instead of their own. However, according to the source, “the content of other users’ chats was not visible”.
On March 22, Sam Altman, CEO of OpenAI, addressed the ChatGPT security flaw on Twitter. “We had a significant issue in ChatGPT due to a bug in an open-source library. A small percentage of users were able to see the titles of other users’ conversation history. We feel awful about this,” Altman wrote.
OpenAI had temporarily disabled ChatGPT on March 21 to fix the issue. However, the “father” of ChatGPT did not name the open-source library involved or detail how OpenAI uses it.
While it’s unclear exactly how many people are affected, Bloomberg warns users to exercise caution when using ChatGPT and other new AI tools, many of which are still in beta.
In its ChatGPT FAQ, OpenAI says it is unable to delete specific content from a user’s chat history, and advises people not to “share any sensitive information” in conversations with the chatbot.
Previously, the chatbot was criticized for giving inaccurate and outdated information and for responding rudely to users. OpenAI president Greg Brockman also admitted: “We made mistakes. The deployed system did not reflect the values we intended. We were also not fast enough in addressing these issues.”
Last week, OpenAI released GPT-4, a language model upgraded from ChatGPT’s version 3.5 in many respects, such as accuracy, safety, and multimodal input handling: it can take an image as input and produce text as output. OpenAI describes GPT-4 as “the latest milestone in efforts to scale deep learning”.
40% of professionals surveyed by Fishbowl said they are using AI tools like ChatGPT to get work done, but up to 70% have not disclosed this to their superiors. Meanwhile, Amazon, Walmart, and Microsoft have warned their employees not to enter confidential information into ChatGPT because of privacy concerns.
Ref: Bloomberg, Business Insider
Source : Genk