Inside the Artificial Brain That’s Remaking the Google Empire
It was one of the most tedious jobs on the internet. A team of Googlers would spend day after day staring at computer screens, scrutinizing tiny snippets of street photographs, asking themselves the same question over and over again: “Am I looking at an address or not?” Click. Yes. Click. Yes. Click. No.
This was a critical part of building the company’s Google Maps service. Knowing the precise address of a building is really helpful information for mapmakers. But that didn’t make life any easier for those poor Googlers who had to figure out whether a string of numbers captured by Google’s roving Street View cars was a phone number, a graffiti tag, or a legitimate address.
Then, a few months ago, they were relieved of their agony, after some Google engineers trained the company’s machines to handle this thankless task. Computers have traditionally muffed this kind of advanced image recognition, but Google finally cracked the problem with its new artificial intelligence system, known as Google Brain. With Brain, Google can now transcribe all of the addresses that Street View has captured in France in less than an hour.
Since its birth in the company’s secretive X Labs three years ago, the Google Brain has flourished inside the company, giving its army of software engineers a way to apply cutting-edge machine-learning algorithms to a growing array of problems. And in many ways, it seems likely to give Google an edge as it expands into new territory over the next decade, much in the way that its search algorithms and data center expertise helped build its massively successful advertising business during the last ten years.
“Google is not really a search company. It’s a machine-learning company,” says Matthew Zeiler, the CEO of visual search startup Clarifai, who worked on Google Brain during a pair of internships. He says that all of Google’s most important projects—autonomous cars, advertising, Google Maps—stand to gain from this type of research. “Everything in the company is really driven by machine learning.”
In addition to the Google Maps work, there’s Android’s voice recognition software and Google+’s image search. But that’s just the beginning, according to Jeff Dean, one of the primary thinkers behind the Brain project. He believes the Brain will help with the company’s search algorithms and boost Google Translate. “We now have probably 30 or 40 different teams at Google using our infrastructure,” says Dean. “Some in production ways, some are exploring it and comparing it to their existing systems, and generally getting pretty good results for a pretty broad set of problems.”

The project is part of a much larger shift towards a new form of artificial intelligence called “deep learning.” Facebook is exploring similar work, as are Microsoft, IBM, and others. But it seems that Google has pushed this technology further—at least for the moment.
AI as a Service
Google Brain—an internal codename, not anything official—started back in 2011, when Stanford’s Andrew Ng joined Google X, the company’s “moonshot” laboratory group, to experiment with deep learning. About a year later, Google had reduced Android’s voice recognition error rate by an astounding 25 percent. Soon the company began snatching up every deep learning expert it could find. Last year, Google hired Geoff Hinton, one of the world’s foremost deep-learning experts. And then in January, the company shelled out $400 million for DeepMind, a secretive deep learning company.
With deep learning, computer scientists build software models that simulate, to a certain extent, the way the human brain learns. These models can then be trained on a mountain of new data, tweaked, and eventually applied to brand new types of jobs. An image recognition model built for Google Image Search, for example, might also help out the Google Maps team. A text analysis model might help Google’s search engine, but it might be useful for Google+ too.
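To make that reuse concrete, here is a minimal sketch in Python with PyTorch. The layer sizes, the task names, and the pattern of swapping a new “head” onto a shared “backbone” are illustrative assumptions for this article, not a description of Google Brain’s actual code.

```python
# A sketch of model reuse: one feature extractor, two different tasks.
# All sizes and names here are hypothetical.
import torch.nn as nn

# A small convolutional "backbone" trained once, e.g. for image search.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> 32 features per image
)

# Task A: a broad image-classification head (say, 1000 categories).
search_head = nn.Linear(32, 1000)

# Task B: reuse the same backbone with a new head, e.g. a 10-way digit
# classifier of the sort a Street View transcription job might need.
digit_head = nn.Linear(32, 10)

for p in backbone.parameters():
    p.requires_grad = False                  # freeze the shared features

maps_model = nn.Sequential(backbone, digit_head)
# Training maps_model now updates only digit_head's weights, so the
# second team inherits the first team's learned features for free.
```

The point of the sketch is the division of labor: the expensive, general-purpose learning happens once, and each new team bolts on a small task-specific piece.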
Google has made a handful of its AI models available on the corporate intranet, and Dean and his team have built the back-end software that lets Google’s army of servers crunch the data and then present the results on a software dashboard that shows developers how well the AI code worked. “It looks like a nuclear reactor control panel,” says Dean.
With some projects (the Android voice work, for instance), Jeff Dean’s team needs to do some heavy lifting to make the learning models work properly for the job at hand. But perhaps half of the teams now using the Google Brain software are simply downloading the source code, tweaking a configuration file, and then pointing Google Brain at their own data. “If you want to do leading edge research in this area and really advance the state-of-the-art in what kinds of models make sense for new kinds of problems, then you really do need a lot of years of training in machine learning,” says Dean. “But if you want to apply this stuff, and what you’re doing is a problem that’s somewhat similar to problems that have already been solved by a deep model, then…people have had pretty good success with that, without being deep learning experts.”
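As a rough illustration of that “tweak a config file, point it at your data” workflow, here is a toy Python sketch. Every configuration key and the train() helper are hypothetical; Google Brain’s real interface isn’t public, and this only mimics the shape of the workflow Dean describes.

```python
# Hypothetical config-driven training: a team edits settings, not model code.
import json

config = json.loads("""
{
  "model": "deep_feedforward",
  "layers": [1024, 1024, 512],
  "learning_rate": 0.01,
  "training_data": "/data/my_team/examples"
}
""")

def train(cfg):
    # In a real system this would dispatch a distributed training job;
    # here it just echoes the settings a team would have edited.
    print(f"Training {cfg['model']} with layers {cfg['layers']} "
          f"on {cfg['training_data']} at lr={cfg['learning_rate']}")

train(config)
```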
The New MapReduce
This kind of internal code-sharing has a precedent at Google: it’s how another cutting-edge technology, MapReduce, caught fire. A decade ago, Dean was part of the team that built MapReduce as a way to harness Google’s tens of thousands of servers and point them at a single problem—indexing the world wide web, for example. The MapReduce code was eventually published internally, and Google’s razor-sharp engineering staff figured out how to use it on a whole new class of big-data computing problems. The ideas behind MapReduce were eventually coded into an open-source project called Hadoop, which gave the rest of the world the number-crunching prowess that had once been the sole province of Google.
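To see the MapReduce idea in miniature, here is a single-process Python sketch of the pattern: a “map” step that emits key/value pairs and a “reduce” step that combines values per key. Word counting is the textbook example; the function names and toy documents are my own, and a real MapReduce job shards this work across thousands of machines rather than running in one loop.

```python
# A toy map-and-reduce over a couple of in-memory "documents".
from collections import defaultdict

def map_phase(document):
    for word in document.split():
        yield (word.lower(), 1)      # emit (key, value) pairs

def reduce_phase(pairs):
    counts = defaultdict(int)
    for key, value in pairs:         # group by key and combine values
        counts[key] += value
    return counts

docs = ["the web is big", "the web is a graph"]
pairs = (p for doc in docs for p in map_phase(doc))
print(dict(reduce_phase(pairs)))     # {'the': 2, 'web': 2, 'is': 2, ...}
```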
This may eventually happen with Google Brain too, as details of Google’s grand AI project trickle out. In January, the company published a paper on its Google Maps work, and given Google’s history of sharing its research work, more such publications are likely.
Given the breadth of the problems these deep learning algorithms solve, there’s a lot more for Google to do with Dean and his team’s code. They’ve also found that the models tend to become more accurate the more data they consume. That may be the next big goal for Google: building AI models that are based on billions of data points, not just millions. As Dean says: “We’re trying to push the next level of scalability in training really, really big models that are accurate.”