Google research chief: 'Emergent artificial intelligence? Hogwash!'
'We have to make it happen'
Posted in Science, 17th May 2013 18:25 GMT
Google I/O If there's any company in the world that can bring true artificial intelligence into being, it's Google, but the company thinks SkyNet is unlikely to appear in the Googlenet without help from the Chocolate Factory.
Though many science fiction writers and even some academics have put faith in the idea of emergent artificial intelligence – that is, the appearance of an entity with a sense of its own identity and agency within a sufficiently complex system – Google's head of research Alfred Spector told The Register he thinks it's unlikely that such a system could develop on its own – even in the planet-spanning million-server Googlenet.
"[AI] just happens on its own? I'm too practical – we have to make it happen," Spector told The Register in a chat after a presentation by Google researchers at Google I/O on Thursday. "It's hard enough to make it happen at this stage."
Spector is the Oppenheimer to Google's various Manhattan Projects, and it's his job to shepherd the company's dedicated research team toward various ambitious goals, whether those be designing machine learning tools for automatically classifying web content, designing wearable computers like Google Glass, or coming up with radical new approaches to performance isolation on shared compute clusters.
One of the overarching projects that ties all these together is the development of a set of artificial intelligence systems that use machine learning techniques to automatically classify and deal with vast amounts of web information.
Google's approach toward artificial intelligence embodies a new way of designing and running complex systems. Rather than create a monolithic entity with its own modules for reasoning about certain inputs and developing hypotheses that let it bootstrap its own intelligence into higher and higher abstractions away from base inputs, as other AI researchers did through much of the 60s and 70s*, Google has instead taken a modular approach.
"We have the knowledge graph, [the] ability to parse natural language, neural network tech [and] enormous opportunities to gain feedback from users," Spector said in an earlier speech at Google IO. "If we combine all these things together with humans in the loop continually providing feedback our systems become ... intelligent."
Spector calls this his "combination hypothesis", and though Google is not there yet – SkyNet does not exist – you can see the first green buds of systems that have the appearance of independent intelligence via some of the company's user-predictive technologies such as Google Now, the new Maps and, of course, the way it filters search results according to individual identity. The company is plugging more money into its AI endeavors, and hired Singularity-obsessed AI-booster Ray Kurzweil in December to help run its AI and machine learning schemes.
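To make the "combination hypothesis" concrete, here is a minimal, purely illustrative sketch of the idea: several independent modules each score a query, their outputs are blended, and human feedback re-weights the blend over time. The component names and weighting scheme are our own assumptions for illustration, not anything Google has described.

```python
# Illustrative sketch only: a toy "combination hypothesis" pipeline.
# The component names (knowledge_graph_score, parser_score, neural_score)
# are hypothetical stand-ins, not Google APIs.

def knowledge_graph_score(query: str) -> float:
    """Pretend structured-knowledge signal: rewards queries naming known entities."""
    entities = {"paris", "einstein", "android"}
    return 1.0 if any(tok in entities for tok in query.lower().split()) else 0.2

def parser_score(query: str) -> float:
    """Pretend natural-language signal: rewards well-formed questions."""
    return 0.8 if query.rstrip().endswith("?") else 0.4

def neural_score(query: str) -> float:
    """Pretend learned-model signal: here just a length heuristic."""
    return min(len(query.split()) / 10.0, 1.0)

COMPONENTS = [knowledge_graph_score, parser_score, neural_score]
weights = [1.0 / len(COMPONENTS)] * len(COMPONENTS)  # start with equal trust

def combined_score(query: str) -> float:
    """Blend the modular signals with the current weights."""
    return sum(w * f(query) for w, f in zip(weights, COMPONENTS))

def feedback(query: str, user_was_satisfied: bool, lr: float = 0.1) -> None:
    """Humans in the loop: nudge weights toward components that fired strongly
    when the user was happy, and away from them when the user was not."""
    global weights
    direction = 1.0 if user_was_satisfied else -1.0
    weights = [max(w + direction * lr * f(query), 0.01)
               for w, f in zip(weights, COMPONENTS)]
    total = sum(weights)
    weights = [w / total for w in weights]  # keep weights normalised

if __name__ == "__main__":
    q = "who was einstein?"
    print("score before feedback:", round(combined_score(q), 3))
    feedback(q, user_was_satisfied=True)
    print("score after feedback: ", round(combined_score(q), 3))
```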
Another company pioneering this approach is IBM, whose Watson tech famously took on and beat Jeopardy champions. Watson is now being tested within hospitals, where the system's ability to rapidly synthesize large quantities of information and generate hypotheses in response to questions has – IBM hopes – great promise for diagnostic healthcare. Spector used to work at IBM where he built some of the systems that sit inside Watson.
"I don't think it's fundamentally different," he said. "[IBM] started with Jeopardy, we're starting with a distribution of queries that we think is valuable to users, but I think both systems are relying on similar concepts: information, machine learning, reinforcement learning. They're both very similar, both very valuable."
But it's that last phrase – reinforcement learning – which is why this vulture believes Google has the greatest chance of effectively designing AI systems. Because Google operates the most widely used search engine in the world, and has hundreds of millions of Gmail, YouTube, and Android users as well, the company has a profound advantage when tuning its artificial intelligence approaches in response to people. It's as though every user of Google services is checking and rechecking Google's AI techniques, correcting the search company when it gets something wrong, and performing an action when it gets it right.
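For a sense of what that feedback loop looks like in miniature, here is a hedged sketch: reinforcement learning reduced to an epsilon-greedy bandit choosing between two hypothetical result rankers, where every user click is a reward that re-tunes the system. The ranker names and click rates are invented for illustration.

```python
# Illustrative sketch only: reinforcement learning from user feedback,
# reduced to an epsilon-greedy bandit over two hypothetical rankers.
import random

ARMS = ["ranker_a", "ranker_b"]           # hypothetical candidate rankers
value = {arm: 0.0 for arm in ARMS}        # running estimate of click-through value
pulls = {arm: 0 for arm in ARMS}

def choose(epsilon: float = 0.1) -> str:
    """Mostly exploit the best-looking ranker, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(ARMS)
    return max(ARMS, key=lambda a: value[a])

def record_feedback(arm: str, clicked: bool) -> None:
    """Incremental mean update: each user interaction nudges the estimate."""
    pulls[arm] += 1
    reward = 1.0 if clicked else 0.0
    value[arm] += (reward - value[arm]) / pulls[arm]

if __name__ == "__main__":
    random.seed(0)
    # Simulate users who click ranker_b's results 70% of the time,
    # and ranker_a's only 40% of the time.
    true_ctr = {"ranker_a": 0.4, "ranker_b": 0.7}
    for _ in range(1000):
        arm = choose()
        record_feedback(arm, clicked=random.random() < true_ctr[arm])
    print(value)  # estimates converge toward the simulated click rates
```

With enough traffic, the system's preference for the better ranker emerges purely from user behaviour, which is the advantage scale gives Google.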
This means that although Google thinks it's unlikely a full SkyNet-style "emergent" AI could spring forth from its technology, its AI approaches will have some of the same characteristics as such systems. "There will be emergent intelligence in the sense that you will be surprised," Spector said, referencing Google Now's ability to pre-emptively suggest transport routes home when the system assumes you have finished the working day, and so on.
But there's one factor Google has not yet perfected: the human mind's ability to be selectively forgetful. "I've always believed there's a certain amount of randomness that generates what we think of as creativity," he said. Putting this randomness in could be the next step – but Google is keeping quiet on that part, for now. ®
* Bootnote:
This approach was a spectacular failure and led in part to the "AI Winter" period of scant funding and little progress that defined AI research in the 80s. It was followed by the rise of "expert systems" – that is, AI technologies with specific goals achieved through the synthesis of vast amounts of domain-specific information, such as chess champion Deep Blue. After expert systems came the current period, in which things such as optical character recognition, machine learning, and Bayesian inference are all coming together to create systems with broader remits, but built out of a distributed set of components.