The Big Idea: David Walton

In his Big Idea for Three Laws Lethal, author David Walton introduces you to those who hold your life in their (figurative) hands — whether you like it or not.


Don’t look now, but intelligent robots are about to decide if you live or die.

Somehow, while we weren’t paying attention, we slipped into a universe where the robots from Isaac Asimov’s “Three Laws of Robotics” stories are about to surround us by the millions. The self-driving cars being sold by Tesla and other manufacturers aren’t quite there yet, but we are quickly entering a world where AIs will be making moment-by-moment choices about your survival. Consider this scenario: Your car is driving you down a two-lane highway with concrete dividers on either side when an I-beam falls off the truck ahead of you. In the other lane is a motorcycle. Should your car swerve, missing the I-beam but hitting the motorcyclist? Or try to brake, knowing it can’t stop in time, possibly killing you? A human driver would act on reflex, but a computer has plenty of time to consider the options and decide who should survive.

My initial “Big Idea” for Three Laws Lethal was simply: Why isn’t anyone writing novels about this?

It’s a topic so overflowing with drama that it was hard to choose a focus for the book. Should I write about a tense legal battle over who is responsible for a deadly crash? What about terrorists who hack cars to kidnap passengers, or use them to deliver bombs anonymously? Or maybe the battle between proprietary algorithms kept secret by big corporations and open algorithms that consumers can replace by downloading ones they like better? Or maybe a deadly war between competing companies trying to destroy each other’s reputations by causing the other’s algorithms to fail?

In the end, Three Laws Lethal includes all of these scenarios, but its central Big Idea is something that draws all of them together. As all of this drama is unfolding in the outside world, a young female programmer recognizes what others don’t: The AIs driving the cars are exhibiting some surprising emergent behavior. The AIs are trained in a virtual game world, one that uses evolutionary principles so that only the best of them survive to be used in real life. But after thousands of generations, the AIs are evolving survival tactics that reach outside of their expected parameters. In short: the cars are developing goals of their own.
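The training setup described here (a virtual game world where only the best-performing AIs survive each generation) is, at its core, a genetic algorithm. As a purely illustrative sketch, not anything taken from the book: the "policy" parameters, the toy fitness function, and all the numbers below are invented for the example:

```python
import random

# Illustrative only: evolve driving "policies" in a toy simulated world.
# A policy is a pair (brake_threshold, swerve_threshold); its fitness is
# a score over randomized hazard scenarios. Each generation, only the top
# half survives, and the survivors produce mutated offspring.

random.seed(42)

def fitness(policy, scenarios):
    brake_t, swerve_t = policy
    score = 0.0
    for hazard_distance, lane_clear in scenarios:
        # Crude stand-in for "survival": reward braking for close hazards,
        # and swerving only when the other lane is clear.
        if hazard_distance < brake_t:
            score += 1.0
        if lane_clear and hazard_distance < swerve_t:
            score += 0.5
    return score

def evolve(pop_size=20, generations=50):
    # Fixed set of scenarios: (distance to hazard, is the other lane clear?)
    scenarios = [(random.random(), random.random() > 0.5) for _ in range(100)]
    pop = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, scenarios), reverse=True)
        survivors = pop[: pop_size // 2]              # only the best survive
        children = [(b + random.gauss(0, 0.05),       # mutated offspring
                     s + random.gauss(0, 0.05)) for b, s in survivors]
        pop = survivors + children
    pop.sort(key=lambda p: fitness(p, scenarios), reverse=True)
    return pop[0]

best = evolve()
```

The unsettling part the novel plays with is that nothing in a loop like this constrains *how* a policy earns its score, so after enough generations the surviving strategies can be ones their designers never anticipated.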

I had something of a eureka moment in the early outlining for this novel when my daughter Naomi (a quiet, caring, quirky introvert) complained that the characters in the books she read were never like her. I realized that her personality was exactly what this novel needed. An introverted, book-loving programmer who struggles with social anxiety would be more likely to sympathize with the AIs than with other humans. So with her permission, I added eight years to her age and made her a main character.

But as I wrote the book, I was left with a question, given Naomi’s empathy for the AIs: Would she warn humanity of the threat? Or would she help the AIs achieve their goals?


Three Laws Lethal – USA: Amazon | Barnes and Noble | BAM | IndieBound | Audible
Three Laws Lethal – Canada: Indigo

Visit the author’s site.  Follow him on Twitter or Facebook.

9 Comments on “The Big Idea: David Walton”

  1. One of those “AI-driven car” labs is in the Kanata “borough” of Ottawa, the west end of the city. It got built two years ago, and their labs are testing product as I type this.

  2. In a perfect world, of course the AI swerves—because it’s already talking to the AI on the motorcycle, which takes its own evasive action. I think the big mistake in self-driving car development is the “self” part.

  3. I got an arc of this for review (my review is over at the Future SF website) and this book knocked my [proverbial] socks off! David Walton is a fantastic writer, and this is a can’t-put-it-down, thought-provoking, helluva ride.

  4. “In short: the cars are developing goals of their own.”
    I had a car like that once. It was a Ford Granada and its goal was to never start or go anywhere.

    But seriously, this sounds good. And zeitgeisty. I, for one, welcome our new Artificial Overlords!

  5. This sounds great, very thought-provoking. I guess in a few years’ time, AI cars will be in common use. It will be interesting to see what effect it has on the accident rate.
    As for the I-beam and the motorcyclist, well, sorry, but the poor old biker is going to have to go.

  6. “Why isn’t anyone writing novels about this?”

    You quote Asimov’s Three Laws of Robotics, a concept introduced in the 1940s, and then wonder why no one has written about AI?

    The only reason the Three Laws are ever introduced in a story is so the robot can violate those laws for dramatic effect. I, Robot had VIKI try to take over the world by force in order to “protect” it. The original Star Trek had a number of episodes about AI run amok in the 1960s. There was a story about a dairy product that became sentient, helped humans for a while, then went off on its own space exploration. That is essentially the origin story of Star Trek: The Motion Picture, where we see the end result of a simple Earth space probe developing intelligence and accumulating power over the course of centuries. Its program was basic: explore, record, and report back to its creator, but with the quirks of AI, it took on its own interpretation of that.

    The only difference between then and now is that back then we didn’t know how AI would work, and now we have a sense of what deep learning is, and how we basically have AIs program themselves, and therefore don’t know the underlying assumptions they create in their programming.

    But the effect is the same: humans create robots for one thing and then they start doing their own thing.

    There have been stories about that for decades.
