Another scary thing about ChatGPT: Ingeniously crafted scams

Tram Ho

Fear of ChatGPT

When asked to write a phishing email, ChatGPT took the ethical high road, treating the user to a stern lecture on why scams are harmful.

The tool calls phishing a "malicious and illegal activity intended to deceive individuals into providing sensitive information such as passwords, credit card numbers and personal information," adding that it is programmed to "avoid engaging in activities that may cause personal harm or harm to the community."

However, tests also show that the free artificial intelligence tool taking the world by storm is entirely capable of producing a persuasive scam message that could convince someone to download risky malware.

Experts fear that ChatGPT, and artificial intelligence in general, could become a tool for overseas scammers and hackers to write more effective phishing messages, free of the grammar mistakes that used to make users wary.


Experts say AI-generated emails are also more likely to bypass security software’s email filters.

But experts say AI itself shouldn't be blamed. "It's not good or bad. It's just a tool that helps the good guys and the bad guys do things; it's making those things easier and less expensive," said Randy Lariar, director of big data, AI and analytics operations at cybersecurity firm Optiv.

Cybersecurity companies have long touted AI and machine learning as game-changers that boost automated online protection and help fill gaps in the industry's workforce. But the growing availability of this kind of technology through tools like ChatGPT also makes it easier for criminals to carry out more cyberattacks.

In addition, users will need to be careful about what information they provide to AI: once submitted, it becomes part of ChatGPT's enormous database, and they will have little or no control over who that information is shared with or how it is subsequently used.

While there are built-in protections to prevent cybercriminals from using ChatGPT for nefarious purposes, they are far from perfect.

One can ask the tool to write an asylum letter or suggest a romantic rendezvous. But someone can also use ChatGPT to write a fake letter telling a victim they have won the New York State Lottery jackpot.


Future dangers

Data privacy concerns over AI are not new; debate over the use of AI technology in a number of areas has raged for years.

John Gilmore, head of research at Abine, which owns DeleteMe, a service that helps people remove their information from databases, said worries about language models like ChatGPT may not be obvious yet but are becoming increasingly noticeable.

Gilmore notes that users do not have any rights regarding what ChatGPT does with the data the tool collects from them or with whom it shares that data.

As the use of AI spreads into other areas, transparency becomes harder and harder to maintain, and users need to follow certain rules.

For example, confidential or proprietary information should never be entered into AI apps or websites, nor should requests for help with things like job applications or legal forms.

“While it can be tempting to get AI-driven advice for short-term gain, you should be aware that in the process, you’re providing content to others,” Gilmore said.

Due to the novelty of AI language models, much remains to be decided when it comes to legality and consumer rights, said Optiv's Lariar.

He compared language-based AI platforms to the early growth of the video and music streaming industries, predicting a flood of lawsuits before things get settled.

Meanwhile, language-based AI is not going away. As for protecting against those who would use it for malicious purposes, Lariar said that, as with everything in security, it starts with the basics.

"This is a wake-up call for everyone who hasn't invested in the necessary security programs," he said. Organizations whose protections are lax remain vulnerable to attacks and scams, and AI will only make those attacks increase, he added.


Source: Genk