Countries race to manage AI

Tram Ho

The rapid advances of artificial intelligence (AI) in recent years have attracted the attention of the whole world. The emergence of AI tools that can compose music, draw pictures, write essays or program has captivated many users.

On the other hand, the intelligence of these AI tools also makes many people anxious, because they can completely transform daily life, from work and education to personal rights and privacy.

A recent Reuters/Ipsos survey in the US showed that 61% of respondents expressed concern that AI could threaten the future of humans.

This concern has prompted governments to step up efforts to strengthen AI regulation. In the US, the government has drafted guidelines for the use and development of AI, and confirmed it will use legal tools to combat the dangers associated with this technology.


Artificial intelligence is gradually becoming an important part of the world. (Illustrative photo: sharda)

In China, the cyber regulator has also published draft rules on the management of AI services, requiring companies to submit security assessments to the authorities before bringing services and products to market.

Europe accelerates efforts to manage artificial intelligence

The strongest effort of all is taking place in the European Union (EU), which is aiming to create the world’s first legislation governing AI.

On June 14, the European Parliament (EP) voted to approve the main directions in the draft law governing artificial intelligence (AI), proposed by the European Commission.

Accordingly, all AI-generated content will have to be flagged, and applications will be classified according to their level of risk. Companies that want to provide AI applications in the EU will have to comply with strict requirements and put risk management measures in place for their products, or face fines of up to 6% of annual turnover.

After the vote, lawmakers began detailed discussions with EU member states. If an agreement is reached by the end of this year, the EU could adopt the world’s first law on AI regulation next year, with the rules expected to come into force from 2026.

Controversial points in the draft European law

Europe has long been one of the leading regions in establishing regulations for technology, such as data protection or social networks. However, with a technology like AI that evolves faster than legislators can craft rules, the challenge is much greater.

Despite having made an important step forward, Europe’s effort to develop regulations governing AI technology remains fraught with hurdles. Many points are not clear and legislators still have differences of opinion on many issues.

The details generating the strongest debate mainly concern the protection of individual privacy and copyright, for example whether to allow artificial intelligence to track an individual’s movements in public. China already does this, but for the European Union such tracking is a violation of privacy. Another question is whether to allow artificial intelligence to recognize emotions, after Denmark used AI voice analysis to determine whether callers to emergency services showed signs of imminent cardiac arrest.

Another controversial point is the extent to which artificial intelligence should be allowed to exploit existing information without producing illegal or pirated content.

The tech world favors the management of artificial intelligence tools

It can be seen that the issues being debated by European authorities will greatly affect the development of artificial intelligence.

Tech firms have tended to oppose tighter controls. This time, however, the mood in the technology community has shifted, owing to widespread concern about the risks AI can bring.

Concerns about the risks if AI is improperly developed have been voiced by hundreds of scientists and technologists.

In March, an open letter from the tech world, signed by billionaire Elon Musk and Apple co-founder Steve Wozniak, urged businesses to “pause” the development of new AI models for half a year to review the risks.

From large technology corporations such as Microsoft and Google to OpenAI, the company behind ChatGPT, all have voiced support for government moves to tighten regulation of the AI technology field.


Generative AI is attracting a large number of users around the world. (Photo: Reuters)

OpenAI CEO Sam Altman also called for cooperation between countries, including a proposal to create an international watchdog in this area.

Concerns about excessive technological control

However, despite agreeing that regulatory oversight is needed, many businesses also expressed concern that excessively tight control measures could hinder the development of technology.

Excessive regulation can stifle technological development. This is the view of several developers of artificial intelligence applications, including Professor Rasmus Rothe. As a longtime AI researcher, particularly in the medical field, Rothe is concerned that focusing too much on potential risks could cost us the opportunity to access the benefits this technology brings.

“Whether AI is good or bad depends on how it is used. AI can wage wars on the Internet, but it can also be used to screen for cancer cells. Overly tight government regulation can overburden small start-ups and make it difficult for them to develop AI technology,” said Professor Rasmus Rothe, co-founder of technology company Merantix.

According to Rothe, an important share of AI advances comes from small companies, which, with their limited resources, have more difficulty than large tech corporations in meeting government regulations. Regulations should therefore be drafted with caution.

“Tightening regulations will create instability – the worst thing for innovation. I’m not against the introduction of regulations, but they have to be extremely clear and transparent. The current process of developing regulations has not yet shown that,” said Professor Rasmus Rothe, Co-founder of technology company Merantix.

In fact, negotiations to develop rules on social media and 5G telecommunications technology saw sharp disagreements over the right approach. With artificial intelligence, a technology that is constantly evolving, governments and businesses are expected to face even more difficulty in finding common ground.

The balance between regulation and promoting innovation

Enterprises are still uncertain about the appropriate form and level of regulation for AI technology. So what solutions do European authorities have to ensure that the new rules will not hinder innovation?

Lawmakers pointed out that up to half of the artificial intelligence startups in Europe are considering moving to another country if the rules become too strict.

The challenge, then, is to limit the downsides of artificial intelligence without stifling innovation, while keeping companies in Europe. The problem is that the technology develops so quickly that it is impossible to know what will appear in the near future. The European draft law therefore focuses on purpose, prohibiting the development of technology for ends that harm the community. But the border between harmful and harmless is not always clear, which is the difficulty in formulating this law. Drafting the bill will take a long time, as the European Commission will have to consult each member state.

It can be seen that, even with support from authorities, experts and businesses, managing artificial intelligence technology is still not easy. Governments will need to strike a balance between regulation that limits risks and space for businesses to innovate and develop. Close international cooperation is also needed to accelerate this process.


Source: Genk