We keep playing with ChatGPT without even realizing the “chilling truth”

Tram Ho

Is ChatGPT conscious like a human?

ChatGPT and the new chatbots are so good at mimicking human interactions that they have led some to question: Is there any possibility that they are conscious?

The answer – at least for now – is no. Nearly everyone who works in the field of artificial intelligence is certain that ChatGPT is not conscious in the way that people understand the term.

But the question doesn’t end there. The so-called consciousness in the age of artificial intelligence is still controversial.

“These deep neural nets, these matrices of millions of numbers – how do you define what consciousness is? It’s kind of like terra incognita,” said Nick Bostrom, founding director of Oxford University’s Future of Humanity Institute, using the Latin term for “unknown territory”.


The creation of artificial life has been the subject of science fiction for decades, while philosophers have spent the same amount of time examining the nature of consciousness.

Some have even argued that some AI programs should now be considered sentient (a Google engineer was fired for making such a claim).

Ilya Sutskever, co-founder of OpenAI, the company behind ChatGPT, has speculated that the algorithms behind his company’s creations may be “somewhat conscious”.

NBC News discussed with researchers who study consciousness whether an advanced chatbot might possess some degree of awareness. And if so, what moral obligation would mankind have to such a creature?

“This is a very new area of research,” says Bostrom. “There’s a ton of unfinished business.”

In purely philosophical terms, experts say the real problem lies in how you define terms and questions.


ChatGPT, along with similar programs such as Microsoft’s search assistant, has been used to assist with tasks such as programming and writing simple text such as press releases, thanks to its ease of use and its ability to use English and other languages convincingly.

They are often referred to as “large language models”, as their fluency largely comes from training on vast corpora of text mined from the internet. While their words are persuasive, these models are not designed with accuracy as a priority and often get facts wrong.

What happens when ChatGPT is conscious?

Spokesmen for ChatGPT and Microsoft both told NBC News that they follow strict ethical guidelines, but did not provide specifics about concerns that their products could become sentient. A Microsoft spokesperson stressed that the Bing chatbot “cannot think or learn on its own.”

In a lengthy post on his website, Stephen Wolfram, a computer scientist, noted that ChatGPT and other large language models use math to compute the probability of each word appearing in a given context, based on the text they were trained on.
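Wolfram’s point can be illustrated with a toy bigram model – a drastic simplification of how GPT-style models actually work, shown here only to make the “probability of the next word” idea concrete (the corpus and function names are invented for this sketch):

```python
from collections import Counter, defaultdict

# Tiny made-up training text standing in for the web-scale corpora
# that real large language models are trained on.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each preceding word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(context):
    """Estimate P(word | context) from counts in the toy corpus."""
    counts = follows[context]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

# After "the", the corpus contains: cat (2x), mat (1x), fish (1x),
# so the model predicts "cat" with probability 0.5.
print(next_word_probs("the"))
```

A real model conditions on far longer contexts using a neural network rather than raw counts, but the output is the same in kind: a probability distribution over possible next words, from which one is sampled.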

Many philosophers agree that for something to be conscious, it must have a subjective experience.

David Chalmers, co-director of New York University’s Center for Mind, Brain and Consciousness, says that while ChatGPT clearly doesn’t possess many elements of consciousness, such as sensation and independent agency, it is nonetheless a complex program.

“These are like chameleons. They can adopt any new personality at any time. It’s unclear if they have underlying goals and beliefs that drive their actions,” Chalmers told NBC News. But over time, he said, such systems could develop a clearer sense of agency.


One problem philosophers point out is that a user can ask a sophisticated chatbot whether it has experiences of its own, but we cannot trust it to give a reliable answer.

“They’re brilliant liars,” said Susan Schneider, founding director of Florida Atlantic University’s Center for Future Thinking.

“They’re increasingly able to interact more seamlessly with humans. These things can tell you that they feel like humans. And then 10 minutes later, in a different conversation, they will say the opposite.”

Schneider notes that current chatbots draw on existing human writing to describe their internal states. So one way to check whether a program is conscious is to cut off its access to that material and see whether it can still describe subjective experience on its own.

The idea that humans could create a kind of conscious being raises questions about moral obligation.

If humanity later shares the earth with a synthetic consciousness, it may force societies to radically re-evaluate some of their foundations.

Most liberal societies agree that people should have reproductive freedom and the right to vote for representative political leadership. But that becomes thorny with computer intelligence.

“If you’re an AI that can make a million copies of yourself in 20 minutes, and then each copy gets one vote, what happens?” Bostrom said.


Source: Genk