The Magazine and Website of the Science Fiction & Fantasy Field

Locus Online
  

Cory Doctorow: Skynet Ascendant

As I’ve written here before, science fiction is terrible at predicting the future, but it’s great at predicting the present. SF writers imagine all the futures they can, and these futures are processed by a huge, dynamic system consisting of editors, booksellers, and readers. The futures that attain popular and commercial success tell us what fears and aspirations for technology and society are bubbling in our collective imaginations.

When you read an era’s popular SF, you don’t learn much about the future, but you sure learn a lot about the past. Fright and hope are the inner and outer boundaries of our imagination, and the stories that appeal to either are the parameters of an era’s political reality.

Pay close attention to the impossibilities. When we find ourselves fascinated by faster than light travel, consciousness uploading, or the silly business from The Matrix of AIs using human beings as batteries, there’s something there that’s chiming with our lived experience of technology and social change.

Postwar SF featured mass-scale, state-level projects, a kind of science fictional New Deal. Americans and their imperial rivals built cities in space, hung skyhooks in orbit, even made Dyson Spheres that treated all the Solar System’s matter as the raw material for a new, human-optimized megaplanet/space-station that would harvest every photon put out by our sun and put it to work for the human race.

Meanwhile, the people buying these books were living in an era of rapid economic growth, and even more importantly, the fruits of that economic growth were distributed to the middle class as well as to society’s richest. This was thanks to nearly unprecedented policies that protected tenants at the expense of landlords, workers at the expense of employers, and buyers at the expense of sellers. How those policies came to be enacted is a question of great interest today, even as most of them have been sunsetted by successive governments across the developed world.

Thomas Piketty’s data-driven economics bestseller Capital in the Twenty-First Century argues that the vast capital destruction of the two World Wars (and the chaos of the interwar years) weakened the grip of the wealthy on the governments of the world’s developed states. The arguments in favor of workplace safety laws, taxes on capital gains, and other policies that undermined the wealthy and benefited the middle class were not new. What was new was the political possibility of these ideas.

As developed nations’ middle classes grew, so did their material wealth, political influence, and expectations that governments would build ambitious undertakings like interstate highways and massive civil engineering projects. These were politically popular – because lawmakers could use them to secure pork for their voters – and also lucrative for government contractors, making ‘‘Big Government’’ a rare point of agreement between the rich and middle-income earners.

(A note on poor people: Piketty’s data suggests that the share of the national wealth controlled by the bottom 50% has not changed much for several centuries – eras of prosperity are mostly about redistributing from the top 10-20% to the next 30-40%.)

Piketty hypothesizes that the returns on investment are usually greater than the rate of growth in an economy – the famous inequality r > g. The best way to get rich is to start with a bunch of money that you turn over to professional managers to invest for you – all things being equal, this will make you richer than you could get by inventing something everyone uses and loves. For example, Piketty contrasts Bill Gates’s fortunes as the founder of Microsoft, once the most profitable company in the world, with Gates’s fortunes as an investor after his retirement from the business. Gates-the-founder made a lot less by creating one of the most successful and profitable products in history than he did when he gave up making stuff and started owning stuff for a living.
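The force of Piketty’s claim is just compound arithmetic: when capital earns even a few points more per year than the economy grows, the gap balloons over a generation. A toy sketch (illustrative rates only, not Piketty’s data) makes the point:

```python
# Toy compounding sketch (illustrative rates, not Piketty's data):
# a fortune earning r = 5% a year versus an economy growing at g = 1.5%.
def compound(principal, rate, years):
    """Value of `principal` after `years` of annual growth at `rate`."""
    return principal * (1 + rate) ** years

fortune = compound(100, 0.05, 30)   # invested wealth after 30 years
economy = compound(100, 0.015, 30)  # overall economy over the same period

# The fortune roughly quadruples while the economy grows by about half,
# so the investor's share of total wealth keeps ratcheting upward.
print(round(fortune), round(economy))  # 432 156
```

Thirty years of that small spread is enough to more than double the investor’s relative share – no invention required.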

By the early 1980s, the share of wealth controlled by the top decile tipped over to the point where they could make their political will felt again – Piketty supports this with data showing that nations elect seriously investor-friendly/worker-unfriendly governments when investors gain control over a critical percentage of the national wealth. Leaders like Reagan, Thatcher, Pinochet, and Mulroney enacted legislative reforms that reversed the post-war trend, dismantling the rules that had given skilled workers an edge over their employers – and the investors the employers served.

The greed-is-good era was also the cyberpunk era of literary globalized corporate dystopias. Even though Neuromancer and Mirrorshades predated the anti-WTO protests by a decade and a half, they painted similar pictures. Educated, skilled people – people who comprised the mass of SF buyers – became a semi-disposable underclass in a world where the hyperrich had literally ascended to the heavens, living in orbital luxury hotels and harvesting wealth from the bulk of humanity like whales straining krill.

Seen in this light, the vicious literary feuds between the cyberpunks and the old guard of space-colonizing stellar engineer writers were a struggle over our political imagination. If we crank the state’s dials all the way over to the right, favoring the industrialist ‘‘job creators’’ to the exclusion of others, will we find our way to the stars by way of trickle-down, or will the overclass graft their way into a decadent New Old Rome, where reality TV and hedge fund raids consume the attention and work we once devoted to exploring our solar system?

Today, wealth disparity consumes the popular imagination and political debates. The front-running science fictional impossibility of the unequal age is rampant artificial intelligence. There were a lot of SF movies produced in the mid-eighties, but few retain the currency of the Terminator and its humanity-annihilating AI, Skynet. Everyone seems to thrum when that chord is plucked – even the NSA named one of its illegal mass surveillance programs SKYNET.

It’s been nearly 15 years since the Matrix movies debuted, but the Red Pill/Blue Pill business still gets a lot of play, and young adults who were small children when Neo fought the AIs know exactly what we mean when we talk about the Matrix.

Stephen Hawking, Elon Musk, and other luminaries have issued panicked warnings about the coming age of humanity-hating computerized overlords. We dote on the party tricks of modern AIs, sending half-admiring/half-dreading laurels to the Watson team when it manages to win at Jeopardy or random-walk its way into a new recipe.

The fear of AIs is way out of proportion to their performance. The Big Data-trawling systems that are supposed to find terrorists or figure out what ads to show you have been a consistent flop. Facebook’s new growth model is sending a lot of Web traffic to businesses whose Facebook followers are increasing, waiting for them to shift their major commercial strategies over to Facebook marketing, then turning off the traffic and demanding recurring payments to send it back – a far cry from using all the facts of your life to figure out that you’re about to buy a car before even you know it.

Google’s self-driving cars can only operate on roads that humans have mapped by hand, manually marking every piece of street-furniture. The NSA can’t point to a single terrorist plot that mass-surveillance has disrupted. Ad personalization sucks so hard you can hear it from orbit.

We don’t need artificial intelligences that think like us, after all. We have a lot of human cognition lying around, going spare – so much that we have to create listicles and other cognitive busy-work to absorb it. An AI that thinks like a human is a redundant vanity project – a thinking version of the ornithopter, a useless mechanical novelty that flies like a bird.

We need machines that don’t fly like birds. We need AI that thinks unlike humans. For example, we need AIs that can be vigilant for bomb-parts on airport X-rays. Humans literally can’t do this. If you spend all day looking for bomb-parts but finding water bottles, your brain will rewire your neurons to look for water bottles. You can’t get good at something you never do.

What does the fear of futuristic AI tell us about the parameters of our present-day fears and hopes?

I think it’s corporations.

We haven’t made Skynet, but we have made these autonomous, transhuman, transnational technologies whose bodies are distributed throughout our physical and economic reality. The Internet of Things version of the razorblade business model (sell cheap handles, use them to lock people into buying expensive blades) means that the products we buy treat us as adversaries, checking to see if we’re breaking the business logic of their makers and self-destructing if they sense tampering.

Corporations run on a form of code – financial regulation and accounting practices – and the modern version of this code literally prohibits corporations from treating human beings with empathy. The principle of fiduciary duty to investors means that where there is a chance to make an investor richer while making a worker or customer miserable, management is obliged to side with the investor, so long as the misery doesn’t backfire so much that it harms the investor’s quarterly return.

We humans are the inconvenient gut-flora of the corporation. They aren’t hostile to us. They aren’t sympathetic to us. Just as every human carries a hundred times more non-human cells in her gut than she has in the rest of her body, every corporation is made up of many separate living creatures that it relies upon for its survival, but which are fundamentally interchangeable and disposable for its purposes. Just as you view stray gut-flora that attacks you as a pathogen and fight it off with antibiotics, corporations attack their human adversaries with an impersonal viciousness that is all the more terrifying for its lack of any emotional heat.

The age of automation gave us stories like Chaplin’s Modern Times, and the age of multinational hedge-fund capitalism made The Matrix into an enduring parable. We’ve gone from being cogs to being a reproductive agar within which new corporations can breed. As Mitt Romney reminded us, ‘‘Corporations are people.’’


Comments

Comment from steven johnson
Time July 3, 2015 at 7:29 am

Most of the literati tell us that there is no difference between SF and fantasy. They tend to get quite incensed at the notion their product can’t be sold in all markets. But the point here is that if we are talking about what the present state of SFF tells us about the hopes and fears and fundamental perspectives of the population, it seems to me incontestable that zombies are much more relevant to people than futuristic AI. Could I suggest that the zombies are an expression of fear of the rebellious masses from outside? I suppose you could consider zombiism a kind of antigen, designed to alert us to enemies of the happy days we have now.

Comment from HANS BERNHARD
Time July 4, 2015 at 1:21 am

I have used the term ‘Skynet’ for quite a while to describe how we should start looking at AI differently and understand that corporations are AI organisms, and that we lost control a long time ago. It will not be the intelligent and independent self-evolving robot but the superorganism that legally and financially overpowers us, evolves, and adapts to laws, opposition, and markets. Hence thanks a lot for this gut-flora metaphor; it helps my thinking/writing in this field. The lack of empathy and emotion is one of the key questions, but not necessarily the differentiation between human, machine, and corporate organisms – there are anti-social personalities (psychopaths) that operate on a similar basis, which opens up the question for me: what psychological parameters do we have to look for in an organism to determine whether we can trust ‘it’ (us) or not?

Comment from David Orban
Time July 4, 2015 at 2:10 am

With the unstoppable innovation of the blockchain we are now writing a new chapter in the history of corporations, explicitly called DACs (Decentralized Autonomous Corporations). We can’t delay the challenge of building a science and engineering of morality. Only if it is explicitly modeled can autonomous cars, and autonomous enterprises make ethical decisions that are sound, and evolve to respect and empathize with human needs.

Comment from Matthew Bellows
Time July 4, 2015 at 5:47 am

Great analogy, although two forces may keep Skynet in check. Not unions, but talent: the most talented workers are much harder for a corporation to attract than eating yogurt. And the development of moral, progressive, holistically minded corporations, and their blessing by regulators (B Corps), offers some reason for hope.

Comment from Les Carter
Time July 4, 2015 at 7:53 am

Poignant, thought-provoking, and spot-on.
A couple of other phenomena come to mind. As someone very frustrated with electronic medical record systems, I’ve had to acknowledge that these systems serve the needs of those who pay for them (hospitals) and their regulatory burdens, not those required to use them, nor necessarily patients.
Secondly the idea of humanity as a virus from whom the world must be protected is becoming more pervasive, IMHO.

Comment from Sean Richardson
Time July 4, 2015 at 9:44 am

So if we selectively breed ourselves to meet the needs of post-mechanistic corporations, we will create the perfect culture for breeding of inhuman cultures, hmm?

That’s going to be a (plus ça change …) chaotic recursive system, and as it comes around again to crisis, as ever, how culture will evolve and convolve will be indescribable.

That said, stories are being written now that will resonate a generation from now … and it will matter which ones spark imaginations.


© 2010-2014 by Locus Publications. All rights reserved. Powered by WordPress, modified from a theme design by Lorem Ipsum