A one-of-a-kind debate: robots argue over whether AI is beneficial or harmful, while humans sit, listen, and judge

Tram Ho

In a debate about the dangers of artificial intelligence, two stances emerge: one side holds that AI's extraordinary computing power will lift humanity to a whole new level, while the other asserts that AI will surpass humans and soon become a threat to life itself.

Why would an AI "hate" people? It has no emotions, no love and no hate; eliminating humans would simply be a logical step. Humans are the only natural enemy of a conscious machine, because only we can switch off a self-running machine. Not to mention that humans are the singularity of evolution, destroying an ecosystem that had existed in peace for billions of years.

But what happens when both sides of the debate come from a robot? To make this one-of-a-kind debate happen, IBM developed Project Debater, an AI system that took both conflicting stances on AI's development. Each of the two opposing debate teams was led by the artificial intelligence, with two human members assisting each team.


With a feminine American accent, Project Debater stood before the crowd at the Cambridge Union (Cambridge University's famed debating society), presenting arguments for each side of the debate in turn.

The words this robot speaks were distilled from 1,100 essays that humans had submitted to the system beforehand. The AI uses an application called "speech by crowd" to build its own case from the available data: it classifies the submissions by main idea, eliminates repeated ideas, and edits the result into a persuasive speech.
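The "eliminate repeated ideas" step can be illustrated with a minimal sketch. This is not IBM's actual pipeline (which is far more sophisticated); it only shows the idea, using a naive string-similarity filter over crowd-submitted arguments:

```python
# Hypothetical sketch: drop near-duplicate arguments from a pile of
# crowd submissions before composing a speech. Uses only the standard
# library; the 0.8 threshold is an arbitrary illustrative choice.
from difflib import SequenceMatcher

def deduplicate_arguments(arguments, threshold=0.8):
    """Keep each argument only if it is not too similar to one already kept."""
    kept = []
    for arg in arguments:
        if all(SequenceMatcher(None, arg.lower(), k.lower()).ratio() < threshold
               for k in kept):
            kept.append(arg)
    return kept

submissions = [
    "AI cannot make ethical decisions.",
    "AI cannot make ethical decisions!",   # near-duplicate, filtered out
    "AI will inherit human biases from its training data.",
]
print(deduplicate_arguments(submissions))  # keeps 2 of the 3 submissions
```

A real system would cluster arguments semantically rather than by surface text, but the principle is the same: redundant points are collapsed so each surviving idea appears once in the final speech.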

With the argument “robots will bring more harm than good”:

"Artificial intelligence can cause great harm. AI will not be able to make ethical decisions, because morality is a uniquely human trait," it said.

"Companies that develop artificial intelligence still have no experience in handling databases and eliminating biases. AI will absorb the prejudices already present and carry that trend forward for generations."

Take, for example, the biases an artificial intelligence system can learn from its input data: if a group of people shows a high crime rate in that data, its members will likely receive unfavorable treatment when judged by a cold, inanimate machine.

When arguing for "more harm than good", the AI still made occasional errors: sometimes it repeated itself, and sometimes it failed to provide evidence for its assertions.


With the argument "robots will bring more benefit than harm":

Alongside familiar arguments, such as superior computing power resolving a string of deadlocks in modern society, it asserted that AI will create more jobs in certain industries and "increase productivity in the workplace." But it then made a contrary point: "Robots' ability to care for patients and teach children will lower the demand for human workers."

The final result: the "more good than harm" side won by a narrow margin, with 51.22% of the audience finding its case more persuasive.

According to engineer Noam Slonim, IBM's aim in building this system is to develop a speech-by-crowd AI tool for gathering feedback on a given issue. This could be, for example, a referendum, or an employee-satisfaction survey at a large corporation.

"This technology can help us build interesting and effective communication channels between decision makers and the people directly affected by those decisions," says Slonim.


Source: Trí Thức Trẻ