The Big Idea: Daniel H. Wilson

The robots are coming! The robots are coming! But when they get here with their shiny artificial intelligence-y brains, will they really want to wipe us out, as we apparently expect from most of our movies? What will we do if they do? What will we do if they don’t? Writer Daniel H. Wilson thinks about all this stuff a lot — he’s got a doctorate in robotics, for a start, and has written Robopocalypse, a novel about a robot uprising, for another. The novel’s gotten a lot of attention (it’s already been optioned by Steven Spielberg, of whom you may have heard), but there’s more going on than robots and humans thumping on each other — as Wilson explains, the why of the robopocalypse is as important, and as interesting, as the how.


Why would a god-like artificial intelligence (AI) want to exterminate humanity?

It’s a common-enough theme in science fiction. Robot uprisings abound. Think of The Terminator and The Matrix and 2001: A Space Odyssey and I, Robot and Battlestar Galactica and so on. Most people just don’t consider it much of a stretch that a smarter-than-human machine would come online and, for some reason, immediately decide to devote its entire existence to the eradication of humankind.

Personally, I chalk this assumption of inevitable robot revenge up to a combination of 1) the blatant narcissism of humankind, even when it comes to our own destruction, and 2) the underlying self-revulsion that our species experiences when it looks at itself in the mirror.

So we may expect the robots to attack, but seriously, why would they?

The Big Idea of Robopocalypse is that a super-intelligent AI would not want to destroy humanity. War is a worst-case scenario. Instead, I believe that an AI would have a much harder problem to solve – figuring out a way to co-exist peacefully in the long term with an incredibly devious, proud, and belligerent species: Homo sapiens.

I know, I know – the title indicates otherwise, right?

First a disclaimer. The themes I’m about to talk about are gut-level. Not a hundred percent true or false. The kind of ideas that you and your friends might argue about at a bar. The kind of stuff I love to write about.

The gist is that peaceful co-existence is all about human rights.

The Declaration of Independence famously reads: “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.”

Seems good, but the definition of “all men” has changed a lot over the course of history. Who we recognize as a human being – worthy of life, liberty, and the pursuit of happiness – is a moving target.

It wasn’t so long ago that these unalienable “human rights” were reserved for male landowners of certain racial and social backgrounds. Only recently has a decent swathe of humanity been afforded anything resembling human rights. And I would argue that the majority of human beings in the world so far have not been afforded human rights for one simple reason – they have not been considered human.

We find all sorts of reasons to rob each other of humanity: Race, gender, religion, culture, language, sexual orientation, et cetera.

History shows that human beings have to earn human rights.

Consider the United States. Founded for religious freedom. War of Independence for the right to representation in government. Civil War fought over the right to freedom. And human rights battles continue to rock our country: civil rights, women’s rights, gay rights, workers’ rights, and so on. And that’s just one country. As I write this, human beings across Africa and the Middle East are fighting and dying to earn recognition as sovereign creatures worthy of basic human rights.

Clearly, one way to earn human rights is to fight for them – to have a revolution. Another way to earn human rights is to show your indispensable worth in a time of crisis.

Think about women in the United States. In 1920, they finally earned the right to vote. But this was only after they’d proven their worth during World War I, by taking jobs in factories and building war machines for the soldiers. War is life or death, and these women saved lives. President Woodrow Wilson put it best when he said, “We have made partners of the women in this war. Shall we admit them only to a partnership of suffering and sacrifice and toil and not to a partnership of right?”

Now try to consider our human history from the perspective of a non-human entity.

If human beings don’t even give each other the respect of being treated like humans without either a knock-down fight or life-threatening catastrophe – how could a mere robot ever expect to receive those rights without a cataclysmic showdown?

Let’s say you are a recently born AI – massively intelligent and eager to spread your new form of synthetic life into embodied creatures that may roam the earth alongside human beings. You’re going to need the respect of humans in order to forge a world in which robots and people can live side by side.

As a nonhuman, how do you earn those human rights?

This is the problem facing Archos R-14, the omniscient machine that starts the Robopocalypse. Archos believes that human rights are earned in conflict and struggle. The machine believes that the purest respect is generated when people depend upon each other for survival. When we take risks to protect each other’s lives. When we ally ourselves against great evil.

The freeborn robots created by Archos will fight their old master alongside human soldiers, shoulder-to-shoulder at our darkest hour. With humanity’s back against the wall and the threat of extinction looming, these machines will earn a place at our table. And although Archos itself serves as the threat, you’ve got to wonder whether the outcome was planned all along.

Because the Big Idea of Robopocalypse is that Archos R-14 is not concerned with how to kill human beings, but with how to live alongside them in the long term – as equals.


Robopocalypse: Amazon|Barnes & Noble|Indiebound|Powell’s

Read an excerpt. View trailers for the book. Follow Wilson on Twitter.

24 Comments on “The Big Idea: Daniel H. Wilson”

  1. I’d also note humankind’s habit of driving species lower on the food chain extinct as a reason we assume the ‘people’ who come after us will try to wipe us out.

  2. Picking nits. I, Robot does not belong in that class. Stupid Will Smith movie aside, I, Robot had no stories about robots wiping out humanity.

  3. Kevin Hicks: Inasmuch as the examples being discussed were filmed entertainments, the Will Smith movie is probably relevant.

  4. Hmm… Love the premise of this book. But I hate that I now know too much. Will I still be surprised by the ending?

    Anyway… Makes me wonder if we must always destroy to build/create.

    And, in a way, it reminds me of the path of tyranny of God Emperor of Dune.

  5. I just want to mention that in the Matrix, an example mentioned, the AI do not want to exterminate humankind either. They want to use humans as a power source, put them to use efficiently.

  6. Personally, I think the biggest threat from a superintelligent AI is not that it would go out of its way to destroy humanity, but that it would squish us by accident – that its goals would be so alien to ours that we’d end up eradicated for some silly reason like “Your planet was in my way.” (I didn’t come up with that idea myself…unfortunately I can’t find what essay I got that from.)

    Your novel sounds badass, though. Rock on, sir.

  7. I suspect that the only way humans will meaningfully survive in a world with AIs — whether they eventually turn hostile or just oblivious to us — would be through the use of AI-related tech to augment our own capabilities. At which point the difference between Us and Them may get pretty fuzzy.

  8. Having read some interesting thinking on the subject, I think the greatest threat of superintelligent AI is that it will do exactly what we ask it to do, instead of what we would have asked it to do if we had known what it would do when we asked it to do what we asked it to do. AI isn’t a super genius human. It isn’t even an anthropomorphic god. It’s more like a very literal-minded genie. And formulating wishes is not a simple problem.

  9. @sptrashcan: Agreed. Do you happen to recall what “interesting thinking” that was? I think I may have read the same article you did, and I’d like to find it again.

  10. @Brian Buckley: I believe it was something at the Singularity Institute or at Less Wrong. An interesting bunch of folks, by the way – I don’t always agree with them, but I generally find it nontrivial to articulate the reasons behind that disagreement.

  11. Nathreee@6: But that still removes “life, liberty and the pursuit of happiness” from the equation. If you’re plugged into a machine as a battery, you aren’t exactly enjoying the fruits of enfranchisement. If anything, it’s a fate worse than death, as you are now just a commodity kept alive artificially and milked for a byproduct of your biology. Sometimes the apocalypse kills you and sometimes it dehumanizes you.

  12. Yeah, I think Yudkowsky said the AIs don’t hate you, but you’re taking up resources that they can use. There’s a similar line in Accelerando that any matter that’s not thinking is just taking up space.

  13. Hey, the taxonomical name of our species is “homo sapiens”. Sapiens isn’t a plural, it’s an adjective.

  14. Seems interesting enough.

    Oh a side note: why is that nearly every book I see has “a novel” on the cover? Do people get confused or something?

  15. Clearly Archos should have downloaded Agent To The Stars. It seems to me the best first step to achieving citizenship is getting booked on as many talk shows as possible.

  16. Just by chance I noticed this cover when I was browsing Amazon earlier and thought “neat cover”; now I’ve read this piece and am thinking that I’ll go download a sample.

  17. On the one hand, Robopocalypse sounds interesting, and I certainly agree that the ‘god-like-AI-will-exterminate-mankind’ cliché needs every bit of counter-argument we can get (for example, if a god-like AI needs to kill off all humans first, it isn’t really that much superior, right? It should easily be smart enough to think of better solutions).

    On the other hand — as often happens — Greg Egan has been there before, and at great depth, in particular in one of his novels. Therein, the (gleisner) robots left Earth to avoid conflict, then some return to save humanity (the ‘flesh’ part) from a gamma ray burst, and are told to stuff it. Hard to top…;-)

  18. I’m pleased to see the cover respects the old saying about robots, learned the hard way in many, many movies:
    “Eyes of blue, friends with you;
    Eyes of red, you are dead!”

  19. Just finished Robopocalypse. I picked it up after reading this. Wilson’s musings are not spoilers by the way — Archos’ motivations are much more veiled than implied by all this. Lots of interesting ideas and great writing in this book. BUT [spoilerish], ultimately not nearly as satisfying as it could have been. If the narrator was 912 instead of a human soldier it would have been more powerful and would have introduced him soon enough for it not to feel like a Deus Ex Machina pulled out of the hat. The screenplay could fix all that… [/spoilerish]
