The Big Idea: Adrian J. Walker
Posted on April 30, 2020 Posted by John Scalzi 7 Comments
For his new novel The Human Son, author Adrian J. Walker decided to get into a different mindset entirely. A very, very different mindset.
ADRIAN J. WALKER:
Human beings make terrible decisions. I wrote The Human Son in the shadow of Brexit, so I know what I’m talking about.
Don’t worry, that wasn’t a political statement. I’m not suggesting anyone was wrong or right in that particular vote; rather that everyone was, and has been, in every single vote that’s ever been taken.
Let me explain.
The Human Son begins with a decision. Five hundred years after they were genetically engineered to fix climate change, a small population of advanced beings called the erta gather in a hall and discuss what to do next. Their purpose has been fulfilled and the earth is rebalanced, but at a cost: in order to fix the planet, humanity had to be allowed to die out. Now they must decide whether or not to resurrect it, but they quickly realise that they lack the right data to make this decision. To remedy this, a quiet and clinical atmospheric chemist named Ima — our hero — volunteers to raise a single human child as her own by way of experiment. This, as every parent will know, leads to unexpected results.
As I wrote about Ima’s life and the (at first) utopian existence of her species, I watched what would be four years of political strife unfold in my own timeline and wondered what the erta would make of it all. The difference between their decision-making abilities and ours became one of the book’s big ideas.
Faced with the monumental task of fixing a broken planet, the erta know immediately what needs to be done first: remove humans. Their lack of human frailties like fear, desire and agenda, combined with supreme scientific prowess, allows them to identify every global system at play, and specifically those which are most difficult to predict and control. As it turns out, these systems are the social, economic and psychological behaviours of humans themselves. The data is right there in front of them, and so the erta’s decision is swift: so long, sapiens, and thanks for all the carbon.
Compare this with the decision-making process of Brexit, or any other great democratic enterprise for that matter. Ask 65 million furious little boxes of fears, hopes, and neuroses to make a gigantic choice with little or no background information and then, to help them decide, shout slogans at them.
Ima would be baffled at such a process. ‘But where is the data?’ she would ask.
The Human Son is told in the first person, and narrating from Ima’s clinical and sometimes harsh perspective had a big impact on my writing. At the start of the book she is a perfect example of her species, seeing things purely as they are rather than what they are like. Simile, metaphor and poetry are of no use to her; in fact, she can’t stand them, so they don’t feature at all in her writing. I was surprised at how fun it was to write in this style, and how liberating it was to describe the world precisely as it appears rather than through the filter of prose. Even more enjoyable was allowing Ima’s voice to develop through the book; her journey as a parent leads her to realise that sometimes truth lies not just in words themselves, but in the space between them. By the final chapter her voice has changed immeasurably.
The more time I spent with Ima the more I thought about what it would mean to delegate to an intelligence such as hers, free from the human gravities of desire and agenda. Fear dominates most discussions about machine intelligence and, to many, the concept of allowing a non-human entity to make human decisions is horrifying. We may as well just boot up SkyNet now and give it the launch codes while we’re at it, right? And even if it didn’t blow us up, what about the humanity? All those fiddly little human nuances we hold so dear. How could any non-human intelligence know what’s best for us?
But such intelligences are already in place and developing, though in arguably more mundane ways and with fewer guns. Big Data allows us to predict social, economic and psychological behaviour with increasing accuracy, and meteorological and geological modelling software is improving by the day. I wonder what we would do if some future amalgamation of all these systems attempted to give us advice. What would be the reaction if it were able to predict with indisputable accuracy the outcome of a political decision? Would it be heard? Or would it be scoffed at, as experts tend to be?
And what if these systems become accessible to us on an individual level? What if we could tap into all this data and use it to help us make decisions about our own lives? And I’m not just talking about the things we already ask of computers (which route, which insurance package, which book, etc.), but rather: Do I take that job? Vote for her? Marry him?
Would we listen to it? Or would we revert to our trusty intuitions — those ‘gut instincts’ we’re so proud of yet which, if we’re honest, so often fail us?
If most of us would do the latter, then it’s because human decision making is as much about asserting an ideal as it is about making the right choice, whether for ourselves or for the other 7 billion bundles of neuroses stumbling around the planet. This means that if we want to develop technology to help us make better decisions, then we must also find a way of abandoning our agendas, desires, and fears. Like the erta, we would need to cast off that which drags us down.
Whether this occurs through cultural shift or rapid transhuman evolution, it will ultimately come down to yet another choice: do we want to remain as we are and continue to stumble, or fly and risk losing our souls?
—-
The Human Son: Amazon|Barnes & Noble|Indiebound|Powell’s
Visit the author’s site. Follow him on Twitter.
Oh goodness, not only am I behind in my reading, I’m behind in my reading of your (John Scalzi) books. (They’re all purchased, sitting in my TBR bookcase.) And how do you help me…? By introducing me to another book that sounds really good and that I’ve just added to my wishlist to be purchased next time I place an order. Okay, if that’s how you want to play the game… ;-)
Joking aside, thanks for bringing this to my attention, sounds very interesting. Definitely want to read this.
The premise of this novel reminds me of the premise in Eric McCormack’s sci-fi series, Travelers: a very advanced AI sends people back in time to fix things that went wrong right at the beginning of the 21st century.
Netflix cancelled Travelers after three seasons. And what an exit it made. Six months after watching the final three episodes, I still think about it almost every day.
I will have to give Mr. Walker a try.
This struck me as odd:
“This means that if we want to develop technology to help us make better decisions, then we must also find a way of abandoning our agendas, desires, and fears. Like the erta, we would need to cast off that which drags us down.”
After all, decisions about what to do are based upon what we want to accomplish – that is, upon our “agendas, desires, and fears.” Absent any such things, there is nothing upon which to base a decision, and no way to make one – and arguably no “decision” even to be made.
“Sounds” interesting as in speculation; after all, it is fictional. Pew research estimates 80% of the global population is “religious” (the degree of that probably not quantifiable), so magical thinking is the “rule,” and it’s unlikely any such event will interrupt our collective return to random scratch and giggles with the appropriate supporting technology (rock, flint/obsidian & basic death by bacterial infection). Check Haiti for appropriate updates and current trends in governance; yes, I know there are more or other examples.
This sounds fascinating. Enough so that I’m breaking my usual rule and buying the book sight unseen. Especially with authors I’ve not read before, I usually check a book out of the library first, then buy it if I enjoyed it enough.
(This is about the Big Idea articles in general, sorry.)
I highly doubt I’m the first to suggest this, but y’ never know.
It’d be fun to read something like this, sometime:
The Big Idea: John Scalzi
When you’re ending a galaxy-spanning trilogy by ending the galactic empire it takes place in, how do you surprise readers who may well have read many other space operas and think they’re prepared for anything you throw at them? Author John Scalzi had to tackle this very question!
I realize that this entire blog is your very own Big Idea, but even so, I still think that would be a fun article to read.
(Oops, I should have said “that threatens to end the galactic empire”. I haven’t read The Last Emperox yet, so if I put a spoiler in, it was unintentional and I was unaware of it.)