The Big Idea: Adrian J. Walker

For his new novel The Human Son, author Adrian J. Walker decided to get into a different mindset entirely. A very, very different mindset.

ADRIAN J. WALKER:

Human beings make terrible decisions. I wrote The Human Son in the shadow of Brexit, so I know what I’m talking about.

Don’t worry, that wasn’t a political statement. I’m not suggesting anyone was wrong or right in that particular vote; rather that everyone was, and has been in every single vote that’s ever been taken.

Let me explain.

The Human Son begins with a decision. Five hundred years after they were genetically engineered to fix climate change, a small population of advanced beings called the erta gather in a hall and discuss what to do next. Their purpose has been fulfilled and the earth is rebalanced, but at a cost: in order to fix the planet, humanity had to be allowed to die out. Now they must decide whether or not to resurrect it, but they quickly realise that they lack the right data to make this decision. To remedy this, a quiet and clinical atmospheric chemist named Ima — our hero — volunteers to raise a single human child as her own by way of experiment. This, as every parent will know, leads to unexpected results.

As I wrote about Ima’s life and the (at first) utopian existence of her species, I watched what would be four years of political strife unfold in my own timeline and wondered what the erta would make of it all. The difference between their decision-making abilities and ours became one of the book’s big ideas.

Faced with the monumental task of fixing a broken planet, the erta know immediately what needs to be done first: remove humans. Their lack of human frailties like fear, desire and agenda, combined with supreme scientific prowess, allows them to identify every global system at play, and specifically those which are most difficult to predict and control. As it turns out, these systems are the social, economic and psychological behaviours of humans themselves. The data is right there in front of them, and so the erta’s decision is swift; so long, sapiens, and thanks for all the carbon.

Compare this with the decision-making process of Brexit, or any other great democratic enterprise for that matter. Ask 65 million furious little boxes of fears, hopes, and neuroses to make a gigantic choice with little or no background information and then, to help them decide, shout slogans at them.

Ima would be baffled at such a process. ‘But where is the data?’ she would ask.

The Human Son is told in the first person, and narrating from Ima’s clinical and sometimes harsh perspective had a big impact on my writing. At the start of the book she is a perfect example of her species, seeing things purely as they are rather than what they are like. Simile, metaphor and poetry are of no use to her; in fact, she can’t stand them, so they don’t feature at all in her writing. I was surprised at how fun it was to write in this style, and how liberating it was to describe the world precisely as it appears rather than through the filter of prose. Even more enjoyable was allowing Ima’s voice to develop through the book; her journey as a parent leads her to realise that sometimes truth lies not just in words themselves, but in the space between them. By the final chapter her voice has changed immeasurably.

The more time I spent with Ima the more I thought about what it would mean to delegate to an intelligence such as hers, free from the human gravities of desire and agenda. Fear dominates most discussions about machine intelligence and, to many, the concept of allowing a non-human entity to make human decisions is horrifying. We may as well just boot up SkyNet now and give it the launch codes while we’re at it, right? And even if it didn’t blow us up, what about the humanity? All those fiddly little human nuances we hold so dear. How could any non-human intelligence know what’s best for us?

But such intelligences are already in place and developing, though in arguably more mundane ways and with fewer guns. Big Data allows us to predict social, economic and psychological behaviour with increasing accuracy, and meteorological and geological modelling software is improving by the day. I wonder what we would do if some future amalgamation of all these systems attempted to give us advice. What would be the reaction if it were able to predict with indisputable accuracy the outcome of a political decision? Would it be heard? Or would it be scoffed at, as experts tend to be?

And what if these systems become accessible to us on an individual level? What if we could tap into all this data and use it to help us make decisions about our own lives? And I’m not just talking about the things we already ask of computers (which route, which insurance package, which book, etc.), but rather: Do I take that job? Vote for her? Marry him?

Would we listen to it? Or would we revert to our trusty intuitions — those ‘gut instincts’ we’re so proud of yet which, if we’re honest, so often fail us?

If most of us would do the latter, then it’s because human decision making is as much about asserting an ideal as it is about making the right choice, whether for ourselves or for the other 7 billion bundles of neuroses stumbling around the planet. This means that if we want to develop technology to help us make better decisions, then we must also find a way of abandoning our agendas, desires, and fears. Like the erta, we would need to cast off that which drags us down.

Whether this occurs through cultural shift or rapid transhuman evolution, it will ultimately come down to yet another choice: do we want to remain as we are and continue to stumble, or fly and risk losing our souls?

—

The Human Son: Amazon|Barnes & Noble|Indiebound|Powell’s

Visit the author’s site. Follow him on Twitter.