Microsoft shares AI risks that are often ignored

Tram Ho

At a recent live event, Microsoft detailed the future of AI in the company's products. According to The Verge's coverage, the company showed how AI tools in Windows and Bing can improve every aspect of life, from cooking to web design. Unlike many other companies, Microsoft also took time to address the potential negative effects of this increasingly popular technology.


Concerns around AI are widespread among tech-savvy users. Some experts say self-aware artificial intelligence could appear within the next generation or two, so public concern about the destructive potential of AI is understandable. As one of the world's leading software companies, Microsoft cannot ignore this topic. Judging by the launch of its new AI solutions and the accompanying press materials, Microsoft appears to take users' concerns about the dangers of this technology seriously.

Damage control in the digital future

The Verge's Nilay Patel said: "Microsoft is having to carefully explain how to prevent its new search engine from helping to plan school shootings." The quote captures an important part of the dark side of AI. When it comes to the dangers of AI, people tend to focus on sci-fi scenarios in which self-improving algorithms escape human control and become independent, potentially harmful entities.

A more plausible future is one in which AI helps humans do terrible things. Most experts expect AI to become part of everyday life, but ethical safeguards will be needed to prevent it from abetting crime, terrorism, and other harmful acts.


According to Microsoft's Sarah Bird, the company's new AI tools ship with these protections built in. Microsoft first addressed the fear of a sentient, malicious AI by developing a "copilot"-style solution that requires human interaction at every step.

Regarding the use of AI by bad actors, The Verge reported that Microsoft will "constantly test and analyze chats to categorize them and improve vulnerabilities in the safety system". This continuous-update model will also extend to the user-facing side of these tools.

According to Bird: "We are further along than ever in developing solutions to reduce risk." At the same time, the company will continuously review its search engine to ensure that potentially dangerous AI features remain under control.


Source: Genk