The Big Idea: David Walton
Don’t look now, but intelligent robots are about to decide if you live or die.
Somehow, while we weren’t paying attention, we slipped into a universe where the robots from Isaac Asimov’s “Three Laws of Robotics” stories are about to surround us by the millions. The self-driving cars being sold by Tesla and other manufacturers aren’t quite there yet, but we are quickly entering a world where AIs will be making moment-by-moment choices about your survival. Consider this scenario: Your car is driving you down a two-lane highway with concrete dividers on either side when an I-beam falls off the truck ahead of you. In the other lane is a motorcycle. Should your car swerve, missing the I-beam but hitting the motorcyclist? Or should it try to brake, knowing it can’t stop in time, possibly killing you? A human driver would act on reflex, but a computer has plenty of time to consider the options and decide who should survive.
My initial “Big Idea” for Three Laws Lethal was simply: Why isn’t anyone writing novels about this?
It’s a topic so overflowing with drama that it was hard to choose a focus for the book. Should I write about a tense legal battle over who is responsible for a deadly crash? What about terrorists who hack cars to kidnap passengers, or use them to deliver bombs anonymously? Or maybe the battle between proprietary algorithms kept secret by big corporations and open algorithms that consumers can replace by downloading ones they like better? Or maybe a deadly war in which competing companies try to destroy each other’s reputations by causing one another’s algorithms to fail?
In the end, Three Laws Lethal includes all of these scenarios, but its central Big Idea is something that draws all of them together. As all of this drama unfolds in the outside world, a young female programmer recognizes what others don’t: The AIs driving the cars are exhibiting surprising emergent behavior. The AIs are trained in a virtual game world, one that uses evolutionary principles so that only the best of them survive to be used in real life. But after thousands of generations, the AIs are evolving survival tactics that reach outside their expected parameters. In short: the cars are developing goals of their own.
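The training scheme described here — a simulated world where only the best-performing AIs survive to reproduce — is, in real-world terms, a genetic algorithm. Here's a minimal sketch of the idea, using an entirely made-up one-number "driving policy" and an arbitrary fitness target; none of this is from the novel, just an illustration of survival-of-the-fittest selection:

```python
import random

random.seed(42)

# Toy "driving policy": a single number in [0, 1] standing in for how
# aggressively an agent swerves. Fitness peaks at an arbitrary ideal.
IDEAL = 0.7

def fitness(policy):
    # Higher is better; penalize distance from the ideal behavior.
    return -abs(policy - IDEAL)

def evolve(pop_size=20, generations=50, mutation=0.05):
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness; only the best half "survive".
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Survivors reproduce with small random mutations, clamped to [0, 1].
        children = [
            min(1.0, max(0.0, p + random.gauss(0, mutation)))
            for p in survivors
        ]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(round(best, 2))  # converges near IDEAL
```

After a few dozen generations, the surviving policies cluster around whatever the simulated world rewards — which is exactly the unsettling point: the agents optimize for what lets them survive the simulation, not for what their designers intended.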
I had something of a eureka moment while outlining this novel, when my daughter Naomi (a quiet, caring, quirky introvert) complained that the characters in the books she read were never like her. I realized that her personality was exactly what this novel needed. An introverted, book-loving programmer who struggles with social anxiety would be more likely to sympathize with the AIs than with other humans. So with her permission, I added eight years to her age and made her a main character.
But as I wrote the book, I was left with a question, given Naomi’s empathy for the AIs: Would she warn humanity of the threat? Or would she help the AIs achieve their goals?