In the summer of 2015, a team of hackers attempted to take control of an unmanned military helicopter known as Little Bird. The helicopter, similar in design to the piloted version long used in US special operations, was stationed at a Boeing facility in Arizona. The hackers had a head start: at the beginning of the exercise, they had already been given access to one part of the drone's computer system. From there, all they needed to do was break into the computer running Little Bird's flight controls, and the aircraft would be theirs.
When the project began, the hired group of hackers, the "Red Team," could have hijacked the helicopter almost as easily as "borrowing" the Wi-Fi in your home. But in the meantime, engineers from the Defense Advanced Research Projects Agency (DARPA) had deployed a new kind of security mechanism: a software system that could not be commandeered. Key parts of Little Bird's computer system were unhackable with the technology available at the time, its code as trustworthy as a mathematical proof. Even though the Red Team was given six weeks and additional access to the aircraft's computer network, it failed to crack Little Bird's defenses.
“They weren't able to break out and disrupt the operation in any way,” said Kathleen Fisher, a computer science professor at Tufts University and the founding program manager of the High-Assurance Cyber Military Systems (HACMS) project. “That result made everyone at DARPA sit up and take notice: we really can put this technology into the systems we need to protect.”
The technology that repelled the hackers is a style of software programming known as formal verification. Unlike most code, which is written informally and judged mainly by whether it works, formally verified software reads like a mathematical proof: each statement follows logically from the preceding one. An entire program can be checked with the same certainty that mathematicians bring to proving theorems.
“You're writing down a mathematical formula that describes the program's behavior, and using some sort of proof checker to verify that the statement is correct,” said Bryan Parno, a researcher on formal verification and security at Microsoft. The desire to create formally verified software has existed almost as long as computer science itself. For a long time it seemed hopelessly out of reach, but advances over the past decade in so-called formal methods have brought the approach closer to mainstream practice. Today it is being pursued in academic collaborations, in the US military, and at technology companies such as Microsoft and Amazon. Interest is growing as ever more essential social activities move online. In the past, when computers sat isolated in homes and offices, a bug was merely an inconvenience; today, a single coding error can open a security hole in a networked system, allowing anyone with the requisite knowledge to roam freely inside it.
Andrew Appel, a professor of computer science at Princeton University and a leading expert in software verification, put it this way: “In the 20th century, if a program had a bug, it might stop working, and that was that. But in the 21st century, a single bug can create an opening that lets hackers take control of the program and steal all your data. A bug that was bad but tolerable has become a far worse vulnerability.”
The Dream of Perfect Programs
In October 1973, Edsger Dijkstra hit upon an idea for creating error-free code. While staying in a hotel during a conference, he was seized in the middle of the night by the notion of making programming more mathematical. As he later recalled, “With my brain burning, I left my bed at 2:30 a.m. and wrote for more than an hour.” That material was the starting point for his 1976 book, “A Discipline of Programming,” which, together with the work of Tony Hoare (who, like Dijkstra, received the Turing Award), established a vision for incorporating proofs of correctness into the writing of programs.
It is not a vision that computer science followed, largely because for many years afterward it seemed impractical, or frankly impossible, to specify a program's function using the rules of formal logic.
A formal specification is a way of defining precisely what a computer program does. And formal verification is a way of proving beyond doubt that a program's code satisfies that specification. To see how this works, imagine writing a computer program for a robot car that drives you to the grocery store. At the operational level, you would define the moves the car can make to accomplish the trip: it can turn left or right, brake or accelerate, turn on or off at either end of the journey. Your program would then be a compilation of those basic operations, arranged so that at the end you arrive at the grocery store, not the airport.
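To make the idea concrete, here is a minimal sketch in Python. The grid, the move names, and the `run` function are all invented for illustration; the point is only that a program is a sequence of primitive operations, and a specification is a claim about where every run of the program must end.

```python
# Hypothetical grid world: the car starts at (0, 0) and should end at the
# grocery store, never the airport.
GROCERY_STORE = (2, 3)
AIRPORT = (9, 9)

# The primitive operations available to the car, as coordinate deltas.
MOVES = {
    "right": (1, 0),
    "left": (-1, 0),
    "up": (0, 1),
    "down": (0, -1),
}

def run(program: list[str], start=(0, 0)) -> tuple[int, int]:
    """Execute a list of primitive moves and return the final position."""
    x, y = start
    for step in program:
        dx, dy = MOVES[step]
        x, y = x + dx, y + dy
    return (x, y)

# The "program": basic operations arranged to reach the store.
trip = ["right", "right", "up", "up", "up"]

# The specification, stated as a property of the program's outcome.
assert run(trip) == GROCERY_STORE
assert run(trip) != AIRPORT
```

Formal verification would prove the final two properties once and for all, for every run; the assertions here merely check them on a single execution.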
The traditional, simple way to check whether a program works is to test it. Coders submit their programs to batteries of unit tests to confirm they perform as designed. If your program were, say, an algorithm that routes a robot car, you might test it between many different pairs of points. This kind of testing produces software that works smoothly most of the time. But unit testing cannot guarantee that software will always work correctly, because there is no way to run a program through every conceivable input. Even if your algorithm succeeded for every destination you tested, there is always the possibility that it will fail in some rare case and open a security gap. In real programs, these failures can be as simple as a buffer overflow, where a program copies slightly more data than it should and overwrites a small piece of the computer's memory. It is a seemingly innocuous error that is hard to eliminate and gives hackers a way to attack the system.
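A small Python sketch (the function and its bug are hypothetical, not from the article) shows why testing alone falls short: every test the programmer thought to write passes, yet an untested input still misbehaves.

```python
def distance_steps(start: int, end: int) -> int:
    """Return the number of unit moves needed to drive from start to end."""
    if start == end:
        return 0
    # Latent bug: the programmer assumed end > start, so driving "backwards"
    # yields a negative step count instead of a positive one.
    return end - start

# A battery of unit tests: every case the programmer thought of passes.
assert distance_steps(0, 5) == 5
assert distance_steps(3, 3) == 0
assert distance_steps(10, 42) == 32

# But an untested case is wrong: driving from 7 back to 2 should take
# 5 steps, not -5. No finite test suite can rule out every such gap.
print(distance_steps(7, 2))  # -> -5, revealing the bug
```

A formal proof that `distance_steps` always returns a non-negative count would have flagged this case without anyone needing to think of it in advance.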
Specifications for real programs demand more than a trip to the grocery store. A software engineer might want to write a program that notarizes documents, stamping each with the time it was received. In this case, the specification would need to state that the counter always increases, so that each incoming document receives a number larger than the one before, and that the program never leaks the key it uses to sign the documents.
That sounds simple enough in plain English, but turning it into a precise language that a computer can apply is far harder.
“Writing a machine-readable specification is, in general, extremely difficult,” Parno said. “Saying ‘don't let my password leak’ is easy, but turning it into a mathematical definition requires real thought.”
Take another example: a program that sorts a list of numbers. An engineer trying to specify a sorting program might come up with something like this:
For every index j in the list, ensure that list[j] ≤ list[j + 1]
This specification ensures that every value in the list is less than or equal to the one that follows it, but it contains a gap: the engineer implicitly assumed that the output would be a permutation of the input. That is, given [7, 3, 5], they expect the program to return [3, 5, 7], which satisfies the definition. But the output [1, 2] satisfies it too, because, as Parno put it, “it is a sorted list, just not the sorted list we were hoping for.”
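The gap is easy to demonstrate in code. In this Python sketch (function names invented for illustration), a "sorter" that ignores its input entirely still satisfies the incomplete specification, while adding the permutation requirement rules it out.

```python
def satisfies_spec(lst: list[int]) -> bool:
    """The flawed spec: every element is <= the element after it."""
    return all(lst[j] <= lst[j + 1] for j in range(len(lst) - 1))

def cheat_sort(lst: list[int]) -> list[int]:
    # Does no sorting at all, yet its output always passes the spec.
    return [1, 2]

assert satisfies_spec(cheat_sort([7, 3, 5]))        # spec is satisfied...
assert cheat_sort([7, 3, 5]) != sorted([7, 3, 5])   # ...but the answer is wrong

# The missing clause, "the output is a permutation of the input,"
# is exactly what rules out cheat_sort.
def satisfies_full_spec(inp: list[int], out: list[int]) -> bool:
    return satisfies_spec(out) and sorted(inp) == sorted(out)

assert not satisfies_full_spec([7, 3, 5], cheat_sort([7, 3, 5]))
assert satisfies_full_spec([7, 3, 5], [3, 5, 7])
```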
In other words, it is extremely difficult to turn your idea of what a program should do into a specification that rules out every possible (but unintended) interpretation of the program's function, and that was for something as simple as sorting. Now imagine specifying something far more abstract, such as protecting a password. “What does that mean mathematically?” Parno added. “Defining it may require writing a mathematical description of what it means to keep a secret, or of what it means for an encryption algorithm to be secure. These are questions we and many others have examined, and finding the right answers is delicate work.”
Block-Based Security
Writing these specifications, along with the annotations the proof checker needs, is laborious: a program that carries its own verification information can be five times as long as an ordinary program that produces the same results.
This burden can be eased somewhat with tools such as programming languages and proof assistants designed to help software engineers construct bombproof code. But in the 1970s those tools did not exist. “Many parts of the science and technology simply weren't developed enough to make the idea work, so around 1980 researchers largely lost interest in the problem,” said Appel, lead investigator of DeepSpec, a research group developing verified computer systems.
Even as the tools improved, another obstacle appeared: no one was sure verification was even necessary. While enthusiasts for formal methods spoke of small coding errors becoming seriously dangerous bugs, everyone else looked around and saw computer programs that ran just fine. True, they crashed now and then, costing a bit of unsaved work or forcing a restart, but that hardly seemed bad enough to justify such demanding methods. In time, even the champions of program verification began to question its usefulness. In the 1990s, Hoare, the father of “Hoare logic,” one of the first formal systems for reasoning about the correctness of computer programs, conceded that specification might be a labor-intensive solution to a problem that did not exist. He wrote:
“Ten years ago, researchers into formal methods (and I was the most mistaken among them) predicted that the programming world would embrace with gratitude every assistance promised by formalization … It has turned out that the world just does not suffer significantly from the kind of problem that our research was originally intended to solve.”
Then came the internet, which did for coding errors what air travel did for the spread of infectious diseases: when every computer is connected to every other, small but tolerable bugs can lead to cascading security failures.
“Here is what we didn't quite understand,” Appel said. “There are certain kinds of software that are exposed to every hacker on the internet, so a single bug in that software can easily become a security vulnerability.”
By the time researchers began to grasp the threats the internet posed to computer security, program verification was ready for a comeback. To begin with, researchers had made big advances in the technology underpinning formal methods: improvements in proof assistants such as Coq and Isabelle; the development of new logical systems (called dependent type theories) that give computers a framework for reasoning about code; and progress in “operational semantics” (in essence, a language with the right words to express what a program should do).
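For a flavor of what a proof assistant does, here is a tiny example written in Lean, an assistant in the same family as Coq and Isabelle (the theorem is a toy chosen for illustration, not drawn from the projects in this article). The assistant's kernel mechanically checks that the proof really establishes the statement.

```lean
-- A machine-checked fact: prepending the empty list leaves a list unchanged.
-- `rfl` asks the kernel to verify that both sides compute to the same value.
theorem nil_append (xs : List Nat) : [] ++ xs = xs := by
  rfl
```

Real verification projects scale this same mechanism up from toy facts to theorems of the form "no input can make this code misbehave."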
“If you start with a specification written in English, you are starting with something inherently ambiguous,” said Jeannette Wing, corporate vice president at Microsoft Research. “Any natural language is ambiguous. In formal verification, you write down a precise, mathematics-based specification of what you want the program to do.”
Researchers in formal methods have also moderated their goals. In the 1970s and early 1980s, they envisioned verifying an entire computer system, from the circuits up through the software. Today most formal-methods researchers focus on smaller pieces of a system that are especially important or vulnerable to attack, such as operating systems or cryptographic protocols.
“We are not claiming we will prove an entire system is correct, 100 percent reliable down to every bit and every circuit,” Wing said. “It would be silly to claim that. We are much clearer now about what we can and cannot do.”
The HACMS project illustrates how big security guarantees can be generated by securing one small, specific part of a computer system. The project's first goal was to create an unhackable recreational quadcopter. The off-the-shelf software running the quadcopter was monolithic, meaning that if an attacker broke into one piece of it, he had access to all of it. So, over the following two years, the HACMS team set about dividing the code on the quadcopter's mission-control computer into partitions.
The team also rebuilt the software architecture, using what Fisher, the project's founding program manager, calls “high-assurance building blocks”: tools that allow programmers to prove the fidelity of their code. One of those verified building blocks comes with a proof ensuring that someone with access to one partition cannot escalate that access and penetrate other partitions.
Later, the HACMS programmers installed this partitioned software on Little Bird. In the test against the Red Team, they gave the hackers access to a partition controlling non-essential aspects of the drone helicopter, such as the camera. The hackers were mathematically guaranteed to get stuck. “They proved in a machine-checked way that the Red Team would not be able to break out of the partition,” Fisher said. “That it held up in practice is consistent with the theorem, but it's good to check.”
Since then, DARPA has been applying the tools and techniques from the HACMS project to other areas of military technology, such as satellites and self-driving convoy trucks. These new initiatives are consistent with the way formal verification has spread over the past decade: each successful project emboldens the next.
Verifying the Internet
Security and reliability are the two main goals that motivate formal methods, and the need to improve both grows clearer by the day. In 2014, a small coding error that formal verification would have caught opened the way for the Heartbleed bug, which threatened to bring down the internet. A year later, a pair of hackers confirmed our worst fears about internet-connected cars when they took control of a Jeep Cherokee.
As the dangers mount, researchers are growing more ambitious. Returning to the optimistic spirit of the 1970s, the DeepSpec project led by Appel (who also worked on HACMS) is attempting to build a fully verified system such as a web server. If successful, it would knit together many of the successes of the past decade. At Microsoft, software engineers have two ambitious projects under way. The first, Everest, aims to create a verified version of HTTPS, the protocol that safeguards web browsing and that Wing calls “the Achilles heel of the internet.”
The second is an effort to create verified specifications for complex cyber-physical systems such as drones. Here the challenge is considerable. Whereas typical software follows discrete, unambiguous steps, the programs that control a drone use machine learning to make decisions based on a continuous stream of environmental data. It is far from obvious how to reason formally about that kind of uncertainty or pin it down in a specification. But formal methods have advanced significantly even in the past few years, and Wing is optimistic that researchers will soon find a way.