Gleb Garanich / Reuters: A wooden model Cylon is posed to look out of the window of the flat of its maker, Ukrainian Dmitry Balandin, in Zaporizhzhya, August 6, 2013.

The Coming Robot Dystopia

All Too Inhuman


The term “robotics revolution” evokes images of the future: a not-too-distant future, perhaps, but an era surely distinct from the present. In fact, that revolution is already well under way. Today, military robots appear on battlefields, drones fill the skies, driverless cars take to the roads, and “telepresence robots” allow people to manifest themselves halfway around the world from their actual location. But the exciting, even seductive appeal of these technological advances has overshadowed deep, sometimes uncomfortable questions about what increasing human-robot interaction will mean for society.

Robotic technologies that collect, interpret, and respond to massive amounts of real-world data on behalf of governments, corporations, and ordinary people will unquestionably advance human life. But they also have the potential to produce dystopian outcomes. We are hardly on the brink of the nightmarish futures conjured by Hollywood movies such as The Matrix or The Terminator, in which intelligent machines attempt to enslave or exterminate humans. But those dark fantasies contain a seed of truth: the robotic future will involve dramatic tradeoffs, some so significant that they could lead to a collective identity crisis over what it means to be human.

A robot is pictured in front of the Houses of Parliament and Westminster Abbey as part of the Campaign to Stop Killer Robots in London, April 2013.

This is a familiar warning when it comes to technological innovations of all kinds. But there is a crucial distinction between what’s happening now and the last great breakthrough in robotic technology, when manufacturing automatons began to appear on factory floors during the late twentieth century. Back then, clear boundaries separated industrial robots from humans: protective fences isolated robot workspaces, ensuring minimal contact between man and machine, and humans and robots performed wholly distinct tasks without interacting.

Such barriers have been breached, not only in the workplace but also in the wider society: robots now share the formerly human-only commons, and humans will increasingly interact socially with a diverse ecosystem of robots. The trouble is that the rich traditions of moral thought that guide human relationships have no equivalent when it comes to robot-to-human interactions. And of course, robots themselves have no innate drive to avoid ethical transgressions regarding, say, privacy or the protection of human life. How robots interact with people depends a great deal on how much their creators know or care about such issues, and robot creators tend to be engineers, programmers, and designers with little training in ethics, human rights, privacy, or security. In the United States, hardly any of the academic engineering programs that grant degrees in robotics require the in-depth study of such fields.

One might hope that political and legal institutions would fill that gap, by steering and constraining the development of robots with the goal of reducing their potential for harm. Ideally, the rapid expansion of robots’ roles in society would be matched by equally impressive advances in regulation and in tort and liability law, so that societies could deal with the issues of accountability and responsibility that will inevitably crop up in the coming years. But the pace of change in robotics is far outstripping the ability of regulators and lawmakers to keep up, especially as large corporations pour massive investments into secretive robotics projects that are nearly invisible to government regulators.

There is every reason to believe that this gap between robot capability and robot regulation will widen every year, posing all kinds of quandaries for law and government. Imagine an adaptive robot that lives with and learns from its human owner. Its behavior over time will be a function of its original programming mixed with the influence of its environment and “upbringing.” It would be difficult for existing liability laws to apportion responsibility if such a machine caused injury, since its actions would be determined not merely by computer code but also by a deep neural-like network that would have learned from various sources. Who would be to blame? The robot? Its owner? Its creator?
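To see why attribution becomes so murky, consider a toy sketch in Python (every name and number below is hypothetical, not drawn from any real system): two robots leave the factory with identical programming, yet after different “upbringings” they favor different behaviors, so reading the source code alone cannot explain either one’s conduct.

import random

class AdaptiveRobot:
    def __init__(self, seed):
        # Identical "factory" programming for every unit sold.
        self.weights = {"cautious": 1.0, "assertive": 1.0}
        self.rng = random.Random(seed)

    def act(self):
        # Choose a behavior in proportion to the learned weights.
        total = sum(self.weights.values())
        r = self.rng.uniform(0, total)
        for behavior, weight in self.weights.items():
            r -= weight
            if r <= 0:
                return behavior
        return behavior

    def learn(self, behavior, owner_feedback):
        # "Upbringing": the owner's reactions reshape the policy after sale.
        self.weights[behavior] = max(0.1, self.weights[behavior] + owner_feedback)

# Two units ship with identical code ...
a, b = AdaptiveRobot(seed=1), AdaptiveRobot(seed=1)

# ... but one owner rewards assertive behavior and the other punishes it.
for _ in range(100):
    a.learn("assertive", +0.2)
    b.learn("assertive", -0.2)

print(a.act(), b.act())  # identically coded robots now favor different behaviors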

We face a future in which robots will test the boundaries of our ethical and legal frameworks with increasing audacity. There will be no easy solutions to this challenge, but there are some steps we can take to prepare for it. Research institutes, universities, and the authorities that regulate them must help ensure that people trained to design and build intelligent machines also receive a rigorous education in ethics. And those already on the front lines of innovation need to concentrate on investing robots with true agency. Human efforts to determine accountability almost always depend on our ability to discover and analyze intention. If we are going to live in a world with machines that act more and more like people and make ever more “personal” choices, then we should insist that robots also be able to communicate with us about what they know, how they know it, and what they want.

A DOUBLE-EDGED SWORD

For a good illustration of the kinds of quandaries that robots will pose by mixing clear social benefits with frustrating ethical dilemmas, consider the wheelchair. Today, more than 65 million people are confined to wheelchairs, contending with many more obstacles than their walking peers and sitting in a world designed for standing. But thanks to robotics, the next two decades will likely see the end of the wheelchair. Researchers at Carnegie Mellon; the University of California, Berkeley; and a number of other medical robotics laboratories are currently developing exoskeletal robotic legs that can sense objects and maintain balance. With these new tools, elderly people who are too frail to walk will find new footing, knowing that a slip that could result in a dangerous fracture will be far less likely. For visually impaired wheelchair users, exoskeletal robotic legs combined with computerized cameras and sensors will create a human-robot team: the person will select a high-level strategy—say, going to a coffee shop—and the legs will take care of the low-level operations of step-by-step navigation and motion.
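In rough outline, the division of labor in such a human-robot team can be sketched in a few lines of Python (the functions and numbers below are illustrative assumptions, not any laboratory’s actual software): the person supplies only the destination, and the legs decompose it into sensor-guided steps.

import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float

def plan_route(start, goal, step_length=0.5):
    # High-level planning: split a straight-line route into waypoints.
    distance = math.hypot(goal.x - start.x, goal.y - start.y)
    n = max(1, int(distance / step_length))
    return [Pose(start.x + (goal.x - start.x) * i / n,
                 start.y + (goal.y - start.y) * i / n)
            for i in range(1, n + 1)]

def take_step(waypoint, obstacle_ahead):
    # Low-level control: balance and obstacle checks, one step at a time.
    if obstacle_ahead:
        return "sidestep"  # local avoidance the user never has to think about
    return f"step toward ({waypoint.x:.1f}, {waypoint.y:.1f})"

# The user expresses only the high-level strategy ("the coffee shop");
# the exoskeleton handles every intermediate decision.
coffee_shop = Pose(4.0, 3.0)
for waypoint in plan_route(Pose(0.0, 0.0), coffee_shop):
    print(take_step(waypoint, obstacle_ahead=False))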

Such outcomes would represent unqualified gains for humanity. But as robotic prosthetics enter the mainstream, the able-bodied will surely want to take advantage of them, too. These prosthetics will house sensors and cloud-connected software that will exceed the human body’s ability to sense, store, and process information. Such combinations are the first step in what futurists such as Hans Moravec and Ray Kurzweil have dubbed “transhumanism”: a post-evolutionary transformation that will replace humans with a hybrid of man and machine. To date, hybrid performance has mostly fallen short of conventional human prowess, but it is merely a matter of time before human-robot couplings greatly outperform purely biological systems.

These superhuman capabilities will not be limited to physical action: computers are increasingly capable of receiving and interpreting brain signals transmitted through electrodes implanted in the head (or arranged around the head) and have even demonstrated rudimentary forms of brain-based machine control. Today, researchers are primarily interested in designing one-way systems, which can read brain signals and then send them to devices such as prosthetic limbs and cars. But no serious obstacles prevent computer interfaces from sending such signals right back, arming a human brain with a silicon turbocharge. The ability to perform complex mathematical calculations, produce top-quality language translation, and even deliver virtuosic musical performances might one day depend not solely on innate skill and practice but also on having access to the best brain-computer hybrid architecture.

Such advantages, however, would run headlong into a set of ethical problems: just as a fine line separates genetic engineering from eugenics, so, too, is there no clear distinction between robotics that would lift a human’s capabilities to their organic limit and those that would vault a person beyond all known boundaries. Such technologies have the potential to vastly magnify the already-significant gaps in opportunity and achievement that exist between people of different economic means. In the robotic future, today’s intense debates about social and economic inequality will seem almost quaint.

EVERY STEP YOU TAKE

Democracy and capitalism rely on a common underlying assumption: if informed individuals acting rationally can express their free will, their individual choices will combine to yield the best outcome for society as a whole. Both systems thus depend on two conditions: people must have access to information and must have the power to make choices. The age of “big data” promises greater access to information of all kinds. But robotic technologies that collect and interpret unprecedented amounts of data about human behavior actually threaten both access to information and freedom of choice.

A fundamental shift has begun to take place in the relationship between automation technologies and human behavior. Conventional interactions between consumers and firms are based on direct economic exchanges: consumers pay for goods and services, and firms provide them. In the digital economy, however, consumers benefit more and more from seemingly free services, while firms profit not by directly charging consumers but by collecting and then monetizing information about consumers’ behavior, often without their knowledge or acquiescence. This kind of basic data mining has become commonplace: think, for example, of how Google analyzes users’ search histories and e-mail messages in order to determine what products they might be interested in buying and then uses that information to sell targeted advertising space to other firms.
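The mechanics of this kind of inference are simple enough to sketch in Python (the categories and keywords below are invented for illustration; real systems are vastly more sophisticated): count which commercial topics recur in a user’s searches, and rank advertisements accordingly.

from collections import Counter

AD_CATEGORIES = {
    "running shoes": {"marathon", "running", "sneakers"},
    "baby products": {"stroller", "diapers", "crib"},
    "travel deals": {"flights", "hotel", "vacation"},
}

def rank_ads(search_history):
    # Count every word the user has searched for ...
    terms = Counter(word for query in search_history
                    for word in query.lower().split())
    # ... and score each ad category by how often its keywords appear.
    scores = {ad: sum(terms[t] for t in keywords)
              for ad, keywords in AD_CATEGORIES.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

history = ["best marathon training plan", "running sneakers sale",
           "cheap flights to Lisbon"]
print(rank_ads(history))  # "running shoes" outranks the other categories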

Zac Vawter, a 31-year-old software engineer, uses the world's first neural-controlled bionic leg in Chicago, November 2012.

As more automation technologies begin to appear in the physical world, such processes will become even more invasive. In the coming years, digital advertisements will incorporate pupil-tracking technology—currently in development at Carnegie Mellon and elsewhere—that can monitor the gazes of passersby from meters away. Fitted with sophisticated cameras and software that can estimate a passerby’s age and gender and observe facial cues to recognize moods and emotions, interactive billboards will not merely display static advertisements to viewers but also conduct ongoing tests of human responses to particular messages and stimuli, noting the emotional responses and purchasing behaviors of every subcategory of consumer and compiling massive, aggregated histories of the effect of each advertisement.
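The bookkeeping behind such a billboard can be sketched as follows (the sensing functions are hypothetical stubs standing in for computer-vision systems; the point is the aggregation, not the perception): each passerby’s estimated demographics and reaction are filed under the advertisement that provoked them.

from collections import defaultdict

def estimate_viewer(frame):
    # Stand-in for camera-based age, gender, and emotion estimation.
    return {"age_band": "25-34", "gender": "f", "emotion": "interest"}

def gaze_seconds(frame):
    # Stand-in for pupil tracking: how long the viewer looked at the ad.
    return 3.2

# ad id -> demographic subcategory -> list of (emotion, seconds of gaze)
responses = defaultdict(lambda: defaultdict(list))

def log_impression(ad_id, frame):
    viewer = estimate_viewer(frame)
    subcategory = (viewer["age_band"], viewer["gender"])
    responses[ad_id][subcategory].append((viewer["emotion"], gaze_seconds(frame)))

log_impression("ad-42", frame=None)
print(dict(responses["ad-42"]))  # one aggregated history per advertisement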

This very concept was depicted in the 2002 science-fiction film Minority Report during a scene in which the protagonist (played by Tom Cruise) walks through a shopping center where holographic signs and avatars bombard him with marketing messages, calling out his name and offering him products and services specifically tailored to him. Far from suggesting a shopper’s paradise, the scene is deeply unsettling, because it captures the way that intelligent machines might someday push humans’ buttons so well that we will become the automatons, under the sway (and even control) of well-informed, highly social robots that have learned how to influence our behavior.

A less fantastic, shorter-term concern about the effects of robotics and machine learning on human agency and well-being revolves around labor. In The Second Machine Age, the economist Erik Brynjolfsson and the information technology expert Andrew McAfee demonstrate that robotic technology is increasingly efficient relative to human labor, offering a significant return on investment when performing both routine manual jobs and simple mental tasks. Unlike human workers, whose collective performance doesn’t change much over time, robot employees keep getting more efficient. With each advance in robot capability, it becomes harder to justify employing humans, even in jobs that require specialized skills or knowledge. No fundamental barrier exists to stop the onward march of robots into the labor market: almost every job, blue collar and white collar, will be at risk in an age of exponential progress in computing and robotics. The result might be higher unemployment, which, in turn, could contribute to rising economic inequality, as the wealth created by new technologies benefits fewer and fewer people.

ONE SINGULAR SENSATION

In discussions and debates among technologists, economists, and philosophers, such visions of the future sit alongside a number of less grim prognostications about what the world will look like once artificial intelligence and machine learning have produced the “technological singularity”: computer systems that can invent new technologies surpassing those devised by their human creators. The details of such predictions vary depending on the forecaster. Some, such as Moravec, foresee a post-evolutionary successor to Homo sapiens that will usher in a new leisure age of comfort and prosperity. Others envision robotic vessels able to “upload” human consciousness. And Kurzweil has suggested that the technological singularity will offer people a kind of software-based immortality.

These long-term views, however, can distract from the more prosaic near-term consequences of the robotics revolution—not the great dislocations caused by a superhuman machine consciousness but rather the small train wrecks that will result from the spread of mediocre robot intelligence. Today, nearly all our social interactions take place with other humans, but we are on the cusp of an era in which machines will become our usual interlocutors. Our driverless cars will join in our fights with one another over parking spots: when an argument leads to a fender-bender, we will insist to our robot mechanics that they have not repaired our robot cars properly. We will negotiate with robot hostesses for corner tables at restaurants where the food is prepared by robot chefs. Every day, we will encounter robots, from hovering drones to delivery machines to taxis, that will operate seamlessly with and without human remote control; daily life will involve constantly interacting with machines without knowing just how much another person might be involved in the machine’s response. There will be no room in such infinitely adjustable human-robot systems for us to treat robots one way and humans another; each style of interaction will infect the other, and the result will be an erosion of our sense of identity.

But the result need not be a robot dystopia. A clear set of decisions about robot design and regulation stands between today’s world of human agency and tomorrow’s world of robot autonomy. Inventors must begin to combine technological ingenuity with sociological awareness, and governments need to design institutions and processes that will help integrate new, artificial agents into society. Today, all civil engineers are required to study ethics because an incorrectly designed bridge can cause great public harm. Roboticists now face the same kind of responsibility, because their creations are no longer mere academic pursuits. Computer science departments, which typically sponsor robotics research, must follow the lead of civil engineering departments and require that every degree candidate receive sufficient training in ethics and some exposure to sociology.

But preparing tomorrow’s robot creators will help only so much; the clock is ticking, and today’s roboticists must begin to think more clearly about how to build intelligent machines able to integrate themselves into societies. An important first step would be to make clear distinctions between robotic appliances and robotic agents. Robots that follow fixed directions and make no autonomous decisions should wear their limited cognitive abilities on their sleeves. This means they should not have faces, and they should not speak or communicate like people or express human emotions: a robotic vacuum cleaner shouldn’t tell its owner that it misses him when he’s at work. As for robots designed to formulate goals, make decisions, and convince people of their agency, they need to grow up. If roboticists want such machines to have anthropomorphic qualities, then their robots must also accept direct accountability: people must be able to question these machines about their knowledge, their goals, their desires, and their intentions.
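What might such direct accountability look like in practice? One possibility, sketched below in Python (the interface and the delivery robot are invented for illustration), is a contract that any machine claiming agency must implement: standard methods through which a person can interrogate its knowledge, its goals, and its reasons for acting.

from abc import ABC, abstractmethod

class AccountableAgent(ABC):
    @abstractmethod
    def knowledge(self):
        """What the robot currently believes, and where each belief came from."""

    @abstractmethod
    def goals(self):
        """What the robot is currently trying to achieve."""

    @abstractmethod
    def explain(self, action):
        """Why the robot took a given action."""

class DeliveryRobot(AccountableAgent):
    def knowledge(self):
        return {"map": "city grid v3 (vendor-supplied)",
                "recipient_home": "learned from past delivery history"}

    def goals(self):
        return ["deliver package #117 by 17:00"]

    def explain(self, action):
        if action == "rang doorbell twice":
            return "No answer after the first ring; my policy retries once."
        return "I have no record of taking that action."

robot = DeliveryRobot()
print(robot.goals())
print(robot.explain("rang doorbell twice"))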

Knowledge and transparency, the most valuable goods promised by the dawn of the information age in the last century, will take on even greater importance in the age of automation. Educators and regulators must help robot inventors acquire knowledge, and the inventors, in turn, must pledge to create more transparent artificial beings.
