Deep understanding of artificial intelligence & Machine Learning at Apple


PART 1

Perhaps the most accurate measure of machine learning's progress at Apple comes from its most important AI acquisition ever: Siri. Siri grew out of an ambitious DARPA program on intelligent assistants; several of the scientists involved later spun the work out into a startup to turn it into a consumer application. Steve Jobs persuaded the founders to sell the company to Apple in 2010, and Siri was built directly into the operating system, debuting as the highlight of the iPhone 4S announcement in October 2011. Today Siri no longer needs to be activated with the home button; users can simply say "Hey Siri" (a feature that relies on machine learning so the iPhone can listen for the phrase without draining the battery). Siri's intelligence has also been woven into the Apple Brain and works on its own, even when users are not directly issuing commands.

Eddy Cue points out that Siri has four core components: speech recognition (to understand when you talk to it), natural language understanding (to grasp what you are saying), execution (to carry out the query or request), and response (to reply back to you). "Machine learning has had a significant impact on all of these," Cue said.

Tom Gruber, head of Siri Advanced Development, and Alex Acero, speech research scientist

Tom Gruber, who came to Apple with the acquisition (Siri's other co-founders left after 2011), says that even before Apple applied neural nets to Siri, the sheer scale of Apple's user base provided the data that would later train those nets. "Steve Jobs said I was about to make a huge jump, from a pilot, a single app, to hundreds of millions of users without even running a beta program. So suddenly you have an enormous number of users, and those users show you how people phrase the things that matter to your application. That was the first revolution. And then the neural networks arrived."

The transition to a neural net for Siri's speech recognition came as more AI experts joined Apple, including Alex Acero. Acero began his career working on speech recognition at Apple in the early 1990s and then spent years at Microsoft Research. "I loved that work and published many papers. But when Siri came along, I realized: here is a chance to make deep neural networks real, not something hundreds of people will read about, but something millions of people will use." In other words, Acero is exactly the kind of scientist Apple looks for, one who optimizes products rather than focusing on publication.

When Acero arrived at Apple three years ago, the company was still licensing much of Siri's voice technology from a third party. Federighi says this is a recurring pattern at Apple. "As a technology area becomes important to the product over time, we build in-house capability so we can deliver the experience we want users to have. To make a great product, we want to own that technology and push it further with internal resources. Speech is a notable example: we started with external resources and got good results from the beginning."

From there, the team began training a neural net to replace Siri's original speech engine. "We have the biggest, baddest GPUs (graphics processing units, dedicated processors that take on acceleration and graphics work for the CPU) ... and a lot of data." When the new version shipped in July 2014, the results showed the effort had not been in vain.

"Siri's error rate has dropped ... mostly thanks to deep learning and the way we have optimized it, not just the algorithm itself but the context of the whole end-to-end product."

Speaking of "end-to-end": Apple was not the first company to use DNNs for speech recognition, but by controlling the entire stack it has advantages of its own. Because Apple designs its own chips, Acero can work directly with the silicon design team and with the engineers who write device firmware to optimize the neural net's performance. Siri's needs have even influenced the iPhone's design.

"It's not just the silicon," Federighi said. "It's how many microphones we put in the device, where we place those microphones, how we tune the hardware, those mics, and the software stack that handles the audio. They become interlocking puzzle pieces – an incredible advantage over businesses that have to build some software and then just sit back and see what happens."

Another aspect: once an Apple neural net works in one product, it can become a core technology used for other purposes. So while machine learning helps Siri understand users, it has also become a tool for dictation in place of typing. As a result, users can get their messages and emails down more accurately when they skip the soft keyboard; gradually, they tap the microphone key and speak more often.

The second Siri component Eddy Cue mentioned is natural language understanding. Siri began using machine learning to understand user intent in November 2014 and released a version with deeper learning more than a year later. As with speech recognition, machine learning lets Siri interpret commands far more flexibly. Cue pulled out his iPhone and invoked Siri to demonstrate: "Send Jane twenty dollars with Square Cash." The screen reflected exactly what he had asked. He tried again with different wording: "Shoot twenty bucks to my wife." The result was the same.
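As a purely illustrative sketch (the type and names below are hypothetical, not Siri's internals), the point of natural language understanding is that two very different phrasings collapse into the same structured request once the entities are resolved:

```swift
// Hypothetical example only: a structured "payment" intent.
struct PaymentIntent: Equatable {
    let app: String        // e.g. "Square Cash"
    let recipient: String  // resolved contact, e.g. "Jane"
    let amountUSD: Int
}

// "Send Jane twenty dollars with Square Cash"
let explicitPhrasing = PaymentIntent(app: "Square Cash", recipient: "Jane", amountUSD: 20)
// "Shoot twenty bucks to my wife" -- "my wife" resolves to the same contact
let casualPhrasing = PaymentIntent(app: "Square Cash", recipient: "Jane", amountUSD: 20)

print(explicitPhrasing == casualPhrasing)   // true: same action, different words
```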

Without these advances in Siri, Apple could not have launched the latest Apple TV with its advanced voice control. While earlier versions of Siri required you to speak in a constrained way, the deep-learning-supercharged version not only picks out specific choices from a vast catalog of movies and songs, it also handles requests like "Show me a thriller with Tom Hanks" (if Siri is really on the ball, it will leave out The Da Vinci Code). "You couldn't offer that before the technology was supercharged by deep learning," Federighi noted.

With iOS 10, released this fall, Siri's voice becomes the last of the four components to be transformed by machine learning. In essence, Siri's voice comes from a database of recordings collected in a voice center; each sentence is stitched together from patches of those recordings. According to Gruber, machine learning smooths everything out and makes Siri sound much more like a real person.

Acero demoed the earlier version of Siri's voice, with its familiar robotic quality, and then the new one, which asks in a charming, fluent tone: "Hi, what can I do for you?" What makes the difference? Deep learning.

A more natural voice may seem like a small detail, but it can make a big difference. "People feel more trust when the voice quality is higher. The more the voice draws users in, the more they use it, which contributes to a self-reinforcing return effect."

Rising Siri usage makes Apple's machine learning improvements all the more meaningful, and Apple has finally opened Siri to outside developers. To some observers, though, this came too late: Siri launched with only a few dozen third-party partners, while Amazon's Alexa already offers more than 1,000 "skills" supplied by outside programmers. Apple argues that the comparison is misleading, because Alexa users have to invoke those skills with specific, prescribed language, whereas Siri will integrate services like Square Cash or Uber more naturally. (Another competitor, Viv, created by a Siri co-founder, also promises tight integration, though it has not announced a launch date.)

At the same time, Apple says Siri's improvements are making a difference, whether through new features or better results on familiar queries. "The number of requests keeps rising and rising. I think we're doing a better job of communicating everything Siri can do. For example, I love sports, and you can ask Siri who it thinks will win a game, and it will give you an answer. I didn't even know we could do that!" Eddy Cue shared.

Perhaps the biggest question about Apple's embrace of machine learning is how it can succeed while holding to its commitment to user privacy. Apple encrypts user information so that no one, not even Apple's own lawyers, can read it (nor can the FBI, even with a court-issued search warrant). Apple also does not collect user information for advertising purposes.

From a user's perspective this is commendable, but Apple's strictness on privacy has not helped the company attract AI talent. "All machine learning experts want is data," said a former Apple employee who now works for an AI company. "But because of its privacy stance, Apple tends to do everything quietly. You may wonder whether that is the right approach, but it has also given Apple a reputation for not being a contender in the AI race."

Apple executives vigorously dispute this view, arguing that it is possible to get all the data machine learning needs without keeping users' personal information in the cloud, and in some cases without even storing it in order to train neural nets.

There are two issues here. The first is the handling of personal information in machine-learning-based systems: when detailed information about a user is gleaned by a neural-net system, what happens to that information? The second is that training behavioral neural nets requires collecting data, so how can that be done without gathering users' personal information?

Apple has an answer to both. "Some people perceive that we can't do these things with AI because we don't have the data. But we have found ways to get the data we need while still keeping it secure. That is the key point."

Apple has solved the first problem, protecting the preferences and personal information that neural nets have identified, by taking advantage of its control over both software and hardware. Simply put, this is the Apple Brain. "Some of the most sensitive things stay where the machine learning runs entirely on the device," Federighi said. As an example, he cites the app icons that appear when you swipe right, predictions of the apps you are likely to open next. Those predictions draw on many factors, many of them tied to user behavior. According to Federighi, the suggestions let people find what they want 90 percent of the time.

Other information Apple keeps on the device may be the most personal data it handles at all: the words users type on the standard iPhone QuickType keyboard. By running a neural network that follows what you type, Apple can spot key events and items such as flight information, contacts, and appointments, but that information stays on your phone. Even in backups stored in Apple's cloud, it is filtered out so the backups cannot expose it. "We don't want that information stored on Apple's servers. There is no need for an organization like Apple to know your habits or where you are going to be."
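As a rough illustration of the same on-device idea (this sketch uses Foundation's NSDataDetector as a stand-in, not Apple's QuickType neural network), key items can be spotted locally without the typed text ever being sent to a server:

```swift
import Foundation

// Illustrative only: detect dates, addresses, and phone numbers locally,
// the way the article describes events being spotted on the device itself.
let typed = "Dinner with Anna on Friday at 7pm at 1 Infinite Loop, Cupertino"
let types: NSTextCheckingResult.CheckingType = [.date, .address, .phoneNumber]
if let detector = try? NSDataDetector(types: types.rawValue) {
    let range = NSRange(typed.startIndex..., in: typed)
    for match in detector.matches(in: typed, options: [], range: range) {
        if let r = Range(match.range, in: typed) {
            print("Detected:", typed[r])   // nothing leaves the device
        }
    }
}
```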

Apple also tries to minimize how much information it keeps at all. Federighi offers this example: if, during a conversation, someone mentions a term that could later become a search, other companies would have to analyze the whole conversation in the cloud to identify it, but an Apple device can spot it without the data ever leaving the user's possession. The system constantly checks for matches against the knowledge base kept on the phone (part of the roughly 200-megabyte "brain").

"It is a compact process, but it runs across a comprehensive knowledge base, with hundreds of thousands of locations and entities." Apple's applications all draw on this knowledge base, including Spotlight search, Maps, and Safari, and it also supports autocorrect.
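A minimal sketch of that approach, with hypothetical names and a tiny stand-in for the 200-megabyte on-device knowledge base the article describes, might look like this:

```swift
// Illustrative only: keep a compact entity set on the device and scan text
// locally, so the raw conversation never has to be uploaded.
struct OnDeviceKnowledgeBase {
    // In the article this is hundreds of thousands of locations and entities;
    // here it is just a tiny placeholder set.
    let entities: Set<String> = ["the da vinci code", "tom hanks", "golden gate bridge"]

    func matches(in text: String) -> [String] {
        let lowered = text.lowercased()
        return entities.filter { lowered.contains($0) }.sorted()
    }
}

let kb = OnDeviceKnowledgeBase()
print(kb.matches(in: "Let's watch The Da Vinci Code with Tom Hanks tonight"))
// ["the da vinci code", "tom hanks"] -- found without the chat leaving the phone
```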

But do Apple's strict privacy rules get in the way of its neural-net algorithms? That is the second issue raised earlier. Neural nets need large amounts of data to be trained fully and accurately. If Apple will not peer into the behavior of its users, where does that data come from? Like many other companies, Apple trains its nets on publicly available datasets (such as stock-image collections used for image recognition). But sometimes it needs more, or more current, information that can only come from its user base. Apple tries to gather that information without knowing whose it is: it anonymizes the data and ties it to random identifiers that are not linked to Apple IDs.

Starting with iOS 10, Apple is adopting a new technique called differential privacy. It relies on information from the crowd without identifying any individual. For example, differential privacy can surface newly popular words that are not yet in Apple's knowledge base or dictionary, queries that spike unexpectedly, or emoji that are suddenly in heavy use, all based on many users' responses. "The traditional way the industry solves this problem is to send every word, every character you type, up to the servers, then go through it all and find the interesting bits," Federighi explained. "Because we use end-to-end encryption, we don't go down that traditional path." Although differential privacy was developed in the research community, Apple is putting it to use at a much larger scale. "We're deploying this across a billion users," Eddy Cue shared.

"We started researching this technology years ago and have gotten it to the point where it is good and practical. Its privacy properties are amazing," Federighi said. (He then described a system involving cryptographic protocols and virtual coin flips that I could not entirely follow. Essentially, the technique adds mathematical noise to certain parts of the data so that Apple can detect usage patterns without identifying individual users.) He also noted that Apple's deployment is a valuable contribution to the field, because it lets the scientists involved work on a real implementation and then publish papers about their work.
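For a sense of how coin flips and mathematical noise can hide individuals while preserving aggregate patterns, here is a minimal sketch of randomized response, one classic differential-privacy technique. It is an illustration under stated assumptions, not Apple's actual mechanism:

```swift
import Foundation

// Each device reports whether it used a given new word, but sometimes answers
// at random, so no single report reveals that user's behavior; the aggregate
// count can still be corrected statistically on the server.
func randomizedResponse(trulyUsedWord: Bool, flipProbability p: Double = 0.25) -> Bool {
    if Double.random(in: 0..<1) < p {
        return Bool.random()        // random answer: plausible deniability
    }
    return trulyUsedWord            // otherwise answer truthfully
}

// Server side: E[reported rate] = (1 - p) * trueRate + p * 0.5, so invert it
// to estimate how common the word really is across all reports.
func estimatedTrueRate(fromReportedRate reported: Double, flipProbability p: Double = 0.25) -> Double {
    (reported - p * 0.5) / (1 - p)
}
```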

Clearly, machine learning has changed Apple's products, but is machine learning changing Apple? In some ways its mindset seems at odds with Apple's old habits. Apple is a company that carefully controls the user experience, down to the sensors that register a user's swipe. Everything used to be designed in advance and specified precisely in code. But when engineers bring in machine learning, they have to step back and let the software discover solutions on its own. Can Apple accept a world in which machine learning systems have a hand in product design?

"Product design is the source of a lot of internal debate. We are used to delivering carefully planned, well-thought-out experiences, where we control every way the system will interact with the user. When you start training a system on large amounts of user-behavior data, [the results that appear] are not necessarily what an Apple designer specified. They are what emerges from the data."

But as Schiller puts it, "Although these technologies have a great influence on the design, in the end we are the ones using them, because they help us deliver a better product."

And here is the conclusion: Apple may not spell out exactly where it stands with machine learning, but the company will exploit it as much as possible to improve its products. The evidence is the "brain" inside your phone.

"A typical customer is experiencing deep learning every day [and this is an example] of why you love an Apple product. [The most interesting thing] is that the deep learning is so unobtrusive that you don't even notice it, until the third time you see it and you stop and ask yourself: how is this happening?"

Source: Techtalk via Backchannel

