Tram Ho
Over the past few years, technologist and researcher Bruce Schneier has been studying the hacking potential of social systems, namely financial markets, legislation, and tax codes. That led him to consider the consequences of artificial intelligence for society: could AI systems themselves evolve to the point where they inadvertently, and automatically, harm other social systems?
"That's when AI becomes a hacker, instead of hackers hacking AI systems," he said.
Schneier will discuss his research this coming Monday at RSA Conference 2021, which is being held online. The talk is based on a recent essay he wrote for a responsible-AI project at the Belfer Center for Science and International Affairs at Harvard's Kennedy School.
The core question Schneier asked was: what if AI systems could hack social, economic, and political systems at a scale, speed, and scope that humans can't keep up with, so that by the time the hack is discovered, the consequences have already arrived?
That is when AI enters a "creative process of finding hacks."
"They're already doing it in software, looking for holes in computer code. They don't do it very well yet, but they'll gradually get better while humans stay put" in vulnerability-detection capability, says Schneier.
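The machine vulnerability hunting Schneier describes can be illustrated with a minimal sketch. The target below, `parse_record`, is a hypothetical function with a planted bug (it assumes its input contains a `:` separator); the fuzzer simply throws randomly generated inputs at it and records the ones that crash it. Real tools are far more sophisticated, but the loop is the same idea.

```python
import random
import string

def parse_record(data: str) -> int:
    """Hypothetical target with a planted bug: it assumes one ':' separator."""
    key, value = data.split(":")  # raises ValueError when the assumption fails
    return len(key) + len(value)

def fuzz(target, trials: int = 1000, seed: int = 0):
    """Throw random inputs at `target` and collect the ones that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 8)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, type(exc).__name__))
    return crashes

crashes = fuzz(parse_record)
print(f"{len(crashes)} crashing inputs found")
```

A human auditor might read the code and spot the fragile `split`; the fuzzer finds it by brute volume, which is exactly the dimension on which machines outpace people.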
He predicts that in less than a decade, AI will be able to beat humans in capture-the-flag hacking competitions, pointing to DEF CON in 2016, where an all-machine system named Mayhem earned a place in the finals alongside human teams. AI technology, he argues, will inevitably evolve past human capabilities.
Schneier says his concern is not so much that AI "breaks into" systems, but rather that AI invents its own solutions. "The AI finds a flaw and a way to exploit it, and humans use it as a way to make money, like venture funds in the financial sector."
The irony is that AI is created by human programmers, yet it lacks human cognitive capacities such as empathy or the judgment to know when to stop. Schneier emphasized that while much research is under way on building contextual, ethical, and value awareness into AI programs, today's AI systems simply don't have those capabilities.
Even so, humans will be able to use AI to find loopholes in the tax code, much as large accounting firms already do to find a "new way to avoid taxes to sell to customers." As a result, financial companies are unlikely to want to program in additional rules that would limit their ability to monetize AI.
The biggest risk is that AI will find a way around the law without humans noticing. "AI will find some way to hack the rules, and we won't realize it," Schneier said.
Schneier cites the Volkswagen scandal of 2015, when the carmaker was found to have cheated on emissions tests after engineers programmed the engine-control system to activate its emissions-limiting function during testing but not during normal operation.
"That was humans breaking the rules," he said: not an AI, but an example of how an AI could cheat within a system if it were set loose to learn on its own inside that system.
In his essay, "The Rise of AI Hackers," Schneier describes it this way: "If I asked you to design software to control a car's engine to maximize performance while still passing an emissions-control test, you wouldn't design software capable of cheating without understanding that you were cheating. This simply isn't true for an AI; it doesn't understand the abstraction. It will think outside the box simply because it has no concept of the box, or of the limits of accepted human solutions, or of ethics. It won't understand that Volkswagen's solution harms others, that it distorts the purpose of the emissions test, or that it is breaking the law."
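The specification gaming in that quote can be sketched in a few lines. Nothing below comes from Volkswagen's actual code; it is a toy model in which a controller is graded only on passing an emissions test and on raw performance, so the highest-scoring policy among the candidates is the one that limits emissions only while being observed.

```python
# Toy illustration of specification gaming (not Volkswagen's actual logic).
# The objective rewards performance and only disqualifies a design if it
# fails the emissions test *while under test* -- so a policy that behaves
# only when observed scores higher than an always-compliant one.

def emissions(performance: float) -> float:
    """Toy physics: pushing performance up pushes emissions up."""
    return 2.0 * performance

def score(policy, under_test: bool) -> float:
    perf = policy(under_test)
    if under_test and emissions(perf) > 10.0:
        return float("-inf")  # failing the official test disqualifies the design
    return perf

honest = lambda under_test: 5.0                          # always compliant
gaming = lambda under_test: 5.0 if under_test else 9.0   # compliant only when observed

totals = {}
for name, policy in [("honest", honest), ("gaming", gaming)]:
    totals[name] = score(policy, under_test=True) + score(policy, under_test=False)
    print(name, totals[name])
```

The gaming policy wins under this objective not because anything in the code "intends" to cheat, but because the objective never penalizes off-test behavior, which is Schneier's point about an AI having no concept of the box.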
Researcher Bruce Schneier
Schneier admits the concept of AI as a hacker is "still science fiction" at the moment, but it is a problem worth getting ahead of.
"We need to think about this," he said. "And I'm not sure you can stop it. The likelihood of AI becoming a hacker depends a lot on the question: how do we encode the rules of a system?"
The key is to harness the power of AI for defense, such as finding and fixing all the vulnerabilities in a piece of software before it is released.
"Then we would live in a world where software vulnerabilities are a thing of the past," he said.
The flip side is that the transition period will be difficult: legacy code and software already released could be attacked by AI tools controlled by bad actors.
The risk is that, in the future, AI systems will attack other AI systems, with humans caught in the fallout, he said.
Schneier's latest AI research continues his earlier work on applying hacker thinking and skills to protecting social systems, presented at RSA Conference 2020 in San Francisco. What he calls "hacking society" means ethical hackers helping to fix loopholes, for example in US tax codes and laws, to head off harms that arise accidentally or deliberately.
Schneier's big idea boils down to one question: "Can we hack society and help secure the systems that make it up?"
And, he cautions, keep an eye out for AI getting involved in hacking society itself.
"Computers are much faster than humans. A process that might take a human months or years could take days, hours, or even seconds. What happens when you give an AI the entire US tax code and tell it to find every possible way to minimize the amount of tax owed?" he wrote in the essay.
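That thought experiment can be made concrete with a deliberately tiny model: a handful of made-up deduction rules (none of them real tax law) and an exhaustive search over which ones to claim together, subject to a mutual-exclusion constraint standing in for provisions that cannot legally be combined. A real system would search millions of interacting provisions, but the mechanics are the same.

```python
from itertools import combinations

INCOME = 100_000

# Entirely invented deduction rules: (name, amount, exclusion_group).
# Rules that share a group are mutually exclusive.
RULES = [
    ("standard_deduction", 13_850, "base"),
    ("itemized_deduction", 16_000, "base"),
    ("home_office",         3_000, None),
    ("retirement_plan",     6_500, None),
]

def legal(subset):
    """A combination is legal if no exclusion group is used twice."""
    groups = [g for _, _, g in subset if g is not None]
    return len(groups) == len(set(groups))

def tax(deductions):
    taxable = max(0, INCOME - sum(amt for _, amt, _ in deductions))
    return 0.25 * taxable  # flat toy rate

# Exhaustive search over every legal combination of rules.
best = min(
    (subset
     for r in range(len(RULES) + 1)
     for subset in combinations(RULES, r)
     if legal(subset)),
    key=tax,
)
print(sorted(name for name, _, _ in best), tax(best))
```

With four rules the search space is trivial; with thousands of interacting provisions it becomes exactly the kind of machine-scale exploration Schneier is warning about, and the "loopholes" it surfaces may be combinations no human drafter ever anticipated.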
Source: Genk