What is deep learning?

Linh Le

We look at the phenomenon behind some of today’s most advanced AI

Deep learning refers to a technique for creating artificial intelligence using a layered neural network, similar in design to the layout of the human brain.

It belongs to a larger family of machine learning techniques that aim to teach a machine to learn from the data it analyses, rather than follow predefined algorithms built for a specific task.

Deep learning is loosely based on the neocortex of the brain, arranging analytical nodes in a series of pathways for data to flow between, essentially connecting them in a net-like network of layered nodes.

The analytical power this method provides is helping to drive futuristic technology such as driverless cars, enabling them to recognise road signs and to distinguish between obstacles they must avoid, such as other cars and pedestrians, and objects they can safely ignore, such as leaves and litter.

Deep learning models can achieve high levels of accuracy, sometimes exceeding human-level performance, and are usually trained by using a large set of labelled data and neural network architectures that contain many layers.

AI and its classifications

The concept of AI is older than computing itself and has been a popular plot device since as early as the 18th century. The technology has been widely portrayed in films as something for mankind to fear, such as 2001: A Space Odyssey or The Terminator. However, we are now beginning to see real-life examples of the technology that are thankfully not killer robots. The field is growing rapidly, and today companies are developing AI-powered chatbots and applications.

In 2015, a company called Luka created a chatbot to memorialise a man called Roman Mazurenko, who had passed away that same year. The "Roman" bot was fed thousands of text messages and social media posts sent by Mazurenko over a four-year period and used an algorithm to mimic his responses. This aspires to what is known as 'General AI': the creation of artificial beings that think and act just as we do.

We can separate AI into two main categories: General AI, described above, and "Narrow AI", the application of AI principles to carry out specific tasks such as cybersecurity. The latter has proved far more successful.

Narrow AI, while not as advanced as the human-like robots of our sci-fi dreams, has still enabled the creation of systems that replicate some degrees of human intelligence, primarily due to advances in what we know as machine learning.

Instead of machines only copying human actions with precoded routines, algorithms are used as a way to train systems to learn from the data they process.

For example, in the case of a system trying to identify a picture of a birthday balloon, a machine may be taught to use pre-defined routines, such as one to detect shapes, one to identify numbers, and another to analyse colours. In early machine learning models, the system would take these human-coded routines and develop algorithms to help it learn to identify objects correctly.

So how does deep learning fit in?

While this was certainly groundbreaking for the development of AI, flaws in the model quickly surfaced. The biggest issue was the use of predefined analysis routines, which required far too much human input along the way. There were also problems when it came to photos that were difficult to process, such as blurred faces or objects.

Models since have drawn on our understanding of the human brain, something that today is known as deep learning.

The term ‘deep’ refers to the construction of a layered neural network, resembling the mesh of interconnected neurons that sit within the brain. Unlike the brain, which acts like a 3D net where any one neuron is able to talk to any other within its vicinity, these artificial networks operate in a tiered structure, with layer upon layer of connected paths for data to flow through. A technique called backpropagation adjusts the weights of the connections between the nodes in these networks so that an incoming data point leads to the right output.
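The mechanics can be sketched in a few lines of Python. The example below trains a tiny two-layer network on the classic XOR problem; the toy task, network size and learning rate are our own choices for illustration, not anything prescribed by a particular system. Backpropagation sends the output error backwards through the layers and nudges each weight in the direction that reduces it.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy training set: XOR, a classic task a single-layer network cannot solve.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# Two hidden neurons feeding one output neuron.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # input -> hidden
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]                      # hidden -> output
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(2)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(2)) + b2)
    return h, y

def mean_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

initial_loss = mean_loss()
lr = 0.5
for epoch in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backpropagation: the output error flows backwards, layer by layer.
        d_y = (y - t) * y * (1 - y)                                # output-layer delta
        d_h = [d_y * w2[j] * h[j] * (1 - h[j]) for j in range(2)]  # hidden-layer deltas
        # Gradient descent: nudge each weight against its error gradient.
        for j in range(2):
            w2[j] -= lr * d_y * h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h[j] * x[i]
            b1[j] -= lr * d_h[j]
        b2 -= lr * d_y

final_loss = mean_loss()
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

Real deep learning systems scale this same loop up to millions of weights and many more layers, but the principle is identical: the mismatch between the network's answer and the right answer dictates how every connection is adjusted.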

Researchers wanted to recreate the brain’s sophisticated analysis process. Each layer is designed not only to analyse data, but also provide additional context each time. As the object passes through each layer, a more accurate picture and understanding of it becomes possible.

In the balloon example, the picture will be broken down into its constituent parts, whether that be its colouring, any numbering or lettering on its surface, the shape it holds, and whether it's being held or flying through the air. Each part is then analysed by the first layer of neurons, a judgement is made, and it's passed along to the next layer.

This could work particularly well in the fight against fraud. For example, a system could be designed to identify fraudulent account activity, involving neural networks that first take raw data, and then add contextual information as it passes through, such as transaction values and location data.
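As a rough sketch of that idea, the Python snippet below assembles a hypothetical transaction's raw value and contextual signals into a feature vector and passes it through a small two-layer network. The transaction fields, feature scalings and weights here are all invented for illustration; in a real system the weights would be learned from labelled historical data rather than picked by hand.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical transaction: the raw value plus contextual signals.
transaction = {
    "amount": 950.0,         # transaction value in pounds
    "km_from_home": 4200.0,  # location context
    "hour": 3,               # time-of-day context
}

# Normalise the raw inputs into a feature vector (scales are illustrative).
x = [
    transaction["amount"] / 1000.0,
    transaction["km_from_home"] / 5000.0,
    1.0 if transaction["hour"] < 6 else 0.0,  # unusual-hours flag
]

# Hand-picked (not learned) weights for a 3-2-1 network, for illustration only.
w1 = [[1.5, 2.0, 1.0], [0.5, -1.0, 2.0]]  # input -> hidden
b1 = [-1.0, -0.5]
w2 = [2.0, 1.5]                           # hidden -> output
b2 = -1.5

# First layer combines the features; second layer turns that into a score.
hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
          for row, b in zip(w1, b1)]
score = sigmoid(sum(w * h for w, h in zip(w2, hidden)) + b2)
print(f"fraud score: {score:.2f}")  # closer to 1.0 = more suspicious
```

A large late-night transaction far from home pushes the score towards 1.0; each layer combines the signals from the layer before it rather than judging any single feature in isolation.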

While some networks may have only a few layers, some programs, including Google's AlphaGo – which managed to defeat a champion player of the Chinese board game Go in 2016 – have hundreds. Naturally, this requires vast computational power, and although neural networks were an ambition of early AI pioneers, until recently the approach remained impractical.

Deep learning today

Many of today’s most advanced machine learning systems use a neural network to process data. Recent successes in the driverless car industry have been made possible by deep learning, while the same principles are being deployed in the defence and aerospace sectors to identify objects from space.

While the potential of deep learning is vast, it has limitations when it comes to more human-like tasks. Deep learning excels at pattern recognition, like the complex but fixed rules of Go. But researchers point to the vast amount of training data required just to teach a machine a single, specific set of rules.

At the current stage of development, deep learning does not appear capable of the elaborate, adaptive thought processes of humans; however, the technology continues to evolve at quite a rate.


Source : http://www.itpro.co.uk