Opinion L.A.

Observations and provocations
from The Times' Opinion staff


Be afraid: Robot experts say machines are catching up

Sky Captain robot at Comic-Con 2004

A panel of robotics experts at the South by Southwest conference couldn't reach a consensus Tuesday on whether machines would soon take over the Earth. But they agreed that advances in artificial intelligence are well-nigh unstoppable. In fact, they said, machines are already demonstrating the ability to learn and to solve problems that humans can't.

The discussion was seasoned with more than a dash of whimsy, and like much of what goes on at South by Southwest, it was closer to a brainstorming session than an academic presentation. Nevertheless, the speakers offered enough anecdotal evidence to make their visions of the future compelling, if not necessarily prescient.

Two of the three -- Daniel Wilson and William Hertling -- are authors of novels about apocalyptic conflicts between machines and humans, so their comments need to be taken with a grain of salt. The third -- Chris Robson, founder of the data analysis firm Parametric Marketing -- wasn't promoting any books, just an (ahem) unconventional point of view about the nature of human consciousness.

The idea of a robot coup d'etat is based on the sci-fi notion of "technological singularity" -- the point where machines become powerful enough to improve their own instruction sets and capabilities without human intervention, leading to a runaway chain of self-upgrades that surpasses human comprehension. And Wilson, for one, is not a believer.

Smart machines like IBM's Watson supercomputer aren't the product of some magical breakthrough, but of a lot of separate research efforts that solved individual problems. Because progress toward the singularity is iterative, Wilson said, "we'll have time to deal with potential unforeseen circumstances."

Hertling wasn't so sanguine. The rapidly advancing power of microchips means that machines with far fewer chips than Watson will be able to perform similar feats. Combine that with some open-source artificial-intelligence software toolkits, he said, and hobbyists will join scientists in writing "crowdsourced" solutions to the technical problems involved.

Noting that cats have about 10% of the brainpower of the average human, Hertling offered the audience a milestone for anticipating the arrival of the singularity. "If you see a robotic cat wandering down the street, we're about 10 years from human-level AI," he said, adding, "That's the point when I'm grabbing my kids and heading for the hills."

Robson went even further. Smart machines are already designing the chips that are paving the way for ever smarter machines. And there's already smart software online -- such as the "tradebots" built to generate profits on Wall Street -- as well as self-replicating programs that can interact with machines -- such as the Stuxnet virus that reportedly tracked down and attacked some of the centrifuges Iran was using to enrich uranium. Put those things together, Robson warned, and you could produce a tradebot that shorted airline stocks, then directed viruses to cause planes to crash.

But why assume that advancing machine intelligence will eventually lead to a Terminator-style future, where robots try to kill off their human overlords? Hertling said it's easy to see smart machines concluding that humans are a threat to the planet they live on, or that the warlike nature of man makes a preemptive strike a robot's best defense. His own fiction, however, finds a happier ending, with machines concluding that they need humans just to keep generating the power they need to operate.

A more interesting question is whether the increasing intelligence of machines and the simulation of personality (hello, Siri!) creates a moral imperative to treat them well. Robson went the furthest on that point, saying: "I know of no scientific reason why AI machines can't have the same kind of consciousness as we have... I see no scientific reason why machines can't suffer."

Wilson said that people are going to start interacting in their daily lives with machines that look increasingly like living creatures. Such machines "should have moral rights, but only in so far as it affects human beings." It's OK to destroy a toaster, he said, but harming a robot that looks like a puppy might breed sociopathic behavior.

As for the future, Wilson said, "There's no stopping this train we're on." The people who are building technology try to ensure its safety, "but there's no way to cover all your bases there." Nevertheless, he argued that the best course is to keep building technology and continue expanding our own capabilities with it.

Robson, on the other hand, said, "I'm filling my bunker at home with good quality scotch, and I would advise everyone else to do the same." Noting how "stunningly successful" humans have been in controlling spam and viruses, he warned, "We are very, very vulnerable at the moment."

ALSO:

Mary Brown: "Obamacare" foe, and broke

Red meat will kill you? Stick a fork in me, I'm done

Al Gore and Sean Parker call for "Occupy Washington"

-- Jon Healey

Credit: Denis Poroy / Associated Press