The Big Idea: Ted Chiang
Posted on August 10, 2010 by John Scalzi
We all love our tech — heck, I’m still swooning over that Droid X phone I bought a couple of weeks ago — but do we love our tech the way we love a partner, or a parent, or a child, or even a pet? The answer to this is, probably not, and if we do, there’s probably something a little bit off with us. But as Ted Chiang explains, by way of explaining his latest novella The Lifecycle of Software Objects, if we ever expect to get artificial intelligence that matches our own intelligence, a little love — real love — might be the thing we need to give to our tech.
TED CHIANG:
People routinely attempt to describe the human brain’s capabilities in terms of instructions per second, and then use that as a guidepost for predicting when computers will be as smart as people. I think this makes about as much sense as judging the brain by the amount of heat it generates. Imagine if someone were to say, “when we have a computer that runs as hot as a human brain, we will have a computer as smart as a human brain.” We’d laugh at such a claim, but people make similar claims about processing speed and for some reason they get taken seriously.
It’s been over a decade since we built a computer that could defeat the best human chess players, yet we’re nowhere near building a robot that can walk into your kitchen and cook you some scrambled eggs. It turns out that, unlike chess, navigating the real world is not a problem that can be solved by simply using faster processors and more memory. There’s more and more evidence that if we want an AI to have common sense, it will have to develop it in the same ways that children do: by imitating others, by trying different things and seeing what works, and most of all by accruing experience. This means that creating a useful AI won’t just be a matter of programming, although some amazing advances in software will definitely be required; it will also involve many years of training. And the more useful you want it to be, the longer the training will take.
But surely the training can be accelerated somehow, can’t it? I don’t believe so, or at least not easily. This seems related to the misconception that a faster computer is a smarter one, but with humans it’s easier to see that speed is not the same thing as intelligence. Suppose you had a digital simulation of Paris Hilton’s brain; no matter how fast a computer you run her on, she’s never going to understand differential equations. By the same token, if you run a child at twice normal speed, all you’d get is a child whose attention span has been cut in half, and how useful is that?
But surely the AI will be able to learn faster because it won’t be hampered by emotions, right? On the contrary, I think creating software that feels emotions will be a necessary step towards creating software that actually thinks, in much the same way that brains capable of emotion are an evolutionary predecessor to brains capable of thought. But even if it’s possible to separate thinking from feeling, there may be other reasons to give AIs emotions. Human beings are social animals, and the success of virtual pets like Tamagotchis demonstrates that we respond to things that appear to need care and affection. And if an AI takes years to train, a good way to get a human to invest that kind of time is to create an emotional bond between the two.
And that’s what I was really interested in writing about: the kind of emotional relationship that might develop between humans and AIs. I don’t mean the affection that people feel for their iPhones or their scrupulously maintained classic cars, because those machines have no desires of their own. It’s only when the other party in the relationship has independent desires that you can really gauge how deep a relationship is. Some pet owners ignore their pets whenever they become inconvenient; some parents do as little for their children as they can get away with; some lovers break up with each other the first time they have a big argument. In all of those cases, the people are unwilling to put effort into the relationship. Having a real relationship, whether with a pet or a child or a lover, requires that you be willing to balance someone else’s wants and needs with your own.
I don’t know if humans will ever have that kind of relationship with AIs, but I feel like this is an area that’s been largely overlooked in science fiction. I’ve read a lot of stories in which people argue that AIs deserve legal rights, but in focusing on the big philosophical question, there’s a mundane reality that these stories gloss over. It’s a bit like how movies show separated lovers overcoming tremendous obstacles to be reunited: that’s wonderfully romantic, but it’s not the whole story when it comes to love; over the long term, love also means working through money problems and picking dirty laundry off the floor. So while achieving legal rights for AIs would clearly be a major milestone, another stage that I think is just as important – and indeed, probably a prerequisite for initiating a legal battle – is for people to put real effort into their individual relationships with AIs.
And even if we don’t care about them having legal rights, there’s still good reason to treat AIs with respect. Think about the pets of neglectful owners, or the lovers who have never stayed with someone longer than a month; are they the pets or lovers you would choose? Think about the kind of people that bad parenting produces; are those the people you want to have as your friends or your employees? No matter what roles we assign AIs, I suspect they will do a better job if, at some point during their development, there were people who cared about them.
—-
The Lifecycle of Software Objects: Amazon (more stock on the way) | Subterranean Press
Read a recent interview with Ted Chiang at Boing Boing.
Sounds good, I’ll have to check this book out! I’m currently working on my PhD in philosophy and cognitive science with a focus on ethics and other kinds of minds (AI, non-human animal intelligences, minimally conscious patients, etc.), so this is right up my alley.
And thank you for pointing out the “but AI won’t have emotions” myth. The idea that emotions are some compartmentalized thing that can be turned on or off, or that they simply can’t exist in an AI at all, is silly superstition. An AI is as capable of emotions as a bunch of electrically-charged cells spitting chemicals at each other.
Thanks for the essay, Ted. This book just keeps getting more and more intriguing.
I’ve been looking forward to Lifecycle since it was announced in April and am eagerly awaiting the mailman each day!
Hmmm,
The alternate theory, using Science Fiction, is that the AI having emotions is dangerous. I mean, the Cylons nearly destroyed humanity because they felt jealousy over how their creators favored humanity.
Or maybe that’s just an expression of anti-AI bigotry in Science Fiction (BSG, Terminator).
I think that the speed improvements come not because a given computer can learn faster, but because a computer’s knowledge can be frozen, duplicated, and stored far better than human knowledge can, particularly since computer hardware is so much more consistent than human hardware, so fewer instance-specific workarounds are needed.
Some training could be accelerated, if a sufficiently realistic simulation can be built. There is no reason a robot couldn’t learn to cook scrambled eggs very quickly in a software kitchen – the key is the quality of the simulation. If we can’t simulate people well, I agree the human interaction tasks will probably have to be learned in real time.
I just read this last night (a friend lent me her ARC) and it was just as wonderful as all of Ted’s stories. I highly recommend it!
One of my favorite poems by Suzette Haden Elgin is on this topic.
Ordered!
Thank you, Misters Scalzi and Chiang; I look forward to reading this. Muchly.
I don’t know why you gotta dis Paris like that.
She might be able to solve some Diffy Q just fine if anyone would just ask her.
Seems relevant:
http://gregegan.customer.netspace.net.au/MISC/SINGLETON/Singleton.html
Fascinating!
As with so many others from the Big Idea, I’ve asked my library to buy a copy or three. :)
As for the uniqueness of the idea, it definitely hasn’t been done often. I think Asimov touched on it at least, though. Robby? Short story about a boy and his robot and the emotional attachment between them.
So, is this book about the people who care for the AIs early on? Sounds fascinating if so. Actually, sounds like a job I’d be well-suited for!
As for the premise in real life: I dunno. I think if Paris Hilton’s brain worked a billion times faster than the person asking her about diffy Q, and was connected to the internet, she would have time to look up the answers to any question you asked, or even study up and really learn the principles, since while it would take that useless waste of skin a decade to learn diffy Q, she’d HAVE a decade of internal time to do it before you noticed a delay.
This boils down to the p-zombie problem: if an AI does a perfect imitation of a person really thinking and having feelings, is there a relevant difference between that AI and such a person?
I think not.
In a bit of synchronicity, this article appeared yesterday on ScienceDaily: http://www.sciencedaily.com/releases/2010/08/100809094527.htm
The money quote is:
The robots are capable of expressing anger, fear, sadness, happiness, excitement and pride and will demonstrate very visible distress if the caregiver fails to provide them comfort when confronted by a stressful situation that they cannot cope with or to interact with them when they need it.
“This behaviour is modelled on what a young child does,” said Dr Cañamero. “This is also very similar to the way chimpanzees and other non-human primates develop affective bonds with their caregivers.”
Xopher, if you leave someone alone for a decade, you create a severely mentally disturbed individual; accelerating their internal clock just enables you to do it faster. Note that solitary confinement is widely considered a form of torture. And saying “we’ll let you out once you learn differential equations” isn’t a reliable motivator; if it were, every prisoner would demonstrate good behavior in hopes of early parole, and obviously that doesn’t happen.
So the question is, how can you motivate Paris Hilton to learn differential equations? There may be ways, but I suspect they would require interacting with her, which is not something you can do if you’re running her at high speed.
I think I’ve found my ‘technology article’ for class this week!
Yup, picking up this one. :-) I’ve actually had conversations with people about the idea that robots will need to be raised… Hell, I even have a good start on a story along those lines (although taking it in a totally different direction); it’s nice to see someone else with the same idea.
Ted, you’re presupposing that the mind in question starts out just like a human mind. This is sort of true with Paris Hilton (who is technically human), but less so with a computer program. For one thing, it has no biological needs, and no need for contact unless that need is programmed into it; there’s no reason an AI would become disturbed if left alone for a decade of internal time.
In addition to which the AI would be acclimated from the beginning to the time differential. Again, unless you deliberately made it to experience distress (or the equivalent), there’s no reason it would be bothered by this. It simply knows that it has plenty of time to answer questions, and that it has its needs met at appropriate intervals. And if you made it to need human contact (as I assume is done in your book), you could adjust the intervals at which it needs it relative to the speed of its thought any way you like. That’s how you use speed to make it smarter.
This is the main flaw I saw in the otherwise-excellent movie The Truman Show. Truman would not, realistically, find it odd to have people suddenly break into product placement in the middle of a conversation (for example). He’d’ve thought that was normal from being around it since birth.
All that said, again, I like the idea of AIs needing to be raised, and would love to be one of the raisers.
I’ll go one better than Ted: a true sentient machine would necessarily have emotions, because emotions are part of intelligence.
You want it to learn about its environment? That’s driven by “seeking” (curiosity). You want it to experiment? Better have a feedback loop that can recognize and respond to errors (shame, or at least distress) and successes (pride). If it interacts with the real world, it will need to avoid miscellaneous dangers (fear…) and probably take action against some of them (…leading to anger).
Some of those dangers will be other people — it will need to learn who it can trust (friendship) and who’s a menace (hate). Some people will be “fundamental” allies, who it needs to defend or otherwise aid. That might be determined by the AI’s inbuilt purpose, or because the AI is intrinsically dependent on them — or the AI might choose to ally with someone who not only has matching goals, but is known to be trustworthy. In any of these cases, that’s love. (It only sounds cynical — think about it.)
In short, emotion represents important state for a creature’s relations with the world around it. Without keeping track of these things, you may have something “clever”, but it doesn’t have “agency”, and it will just get smacked around until it’s destroyed.
Xopher, yes, I was talking about a human being being run at high speeds. You may think this is a straw man, but folks like Vernor Vinge have said that running someone at high speeds would in fact make them superintelligent (he calls it “weak superhumanity”); this idea has gotten a lot of traction among Singularity believers, which is why I felt it worth addressing.
You say that an AI could be designed to not be bothered by isolation. Sure that’s possible, but that wouldn’t be enough; some people would remain unable to solve a problem indefinitely simply because they couldn’t see the mistake they were making; without someone else to point out where they went wrong, they’d never understand why they weren’t getting the right answer. And sure, there might be a way to design around that, too, but that wouldn’t be enough either. There are dozens of differences between the way Paris Hilton’s mind approaches problems and the way Richard Feynman’s did. It’s not at all clear that these are properties that could be easily adjusted in an AI; they might be emergent properties of the system.
Of course, we already have chess-playing programs whose performance improves when you run them on faster computers; they never “get bored” of evaluating moves, they just keep traversing the search tree until we tell them to stop. But is that all we’re looking for from AI?
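For the curious, here is a minimal Python sketch of that kind of “anytime” search; the board interface used here (legal_moves, apply, evaluate) is purely illustrative and not taken from any real engine:

```python
import time

def negamax(board, depth):
    # Plain fixed-depth negamax: score a position from the side to move's view.
    moves = board.legal_moves()
    if depth == 0 or not moves:
        return board.evaluate()
    return max(-negamax(board.apply(m), depth - 1) for m in moves)

def best_move_at_depth(board, depth):
    # Choose the move whose resulting position scores best for us.
    return max(board.legal_moves(),
               key=lambda m: -negamax(board.apply(m), depth - 1))

def search_until_deadline(board, seconds):
    """Iterative deepening: keep searching one ply deeper until time runs out.
    The program never 'gets bored'; a faster computer simply reaches greater
    depths (and so plays better) in the same wall-clock time."""
    stop_at = time.time() + seconds
    best = None
    depth = 1
    while time.time() < stop_at:
        best = best_move_at_depth(board, depth)
        depth += 1
    return best
```

The point of the sketch is only that more speed buys deeper search of the same kind, which is exactly the sense in which a faster chess program is “better” without being any smarter.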
The first thing that pops into my mind is this: are humans a good model to base AI on? Paris being a perfect example of why I ask. Top this off with the fact that we really don’t understand how human intelligence works. Shoot, we don’t even understand how memory works in people.
I forget who it was who said that a human brain may not be the optimal consciousness engine. It’s the best one we know of, but that doesn’t mean it can’t be improved upon.
Actually I think it was Charlie Stross. And Charlie Stross, as you know, strides among mere mortals as a giant among ants.
Claire@11:
Then there’s Lester del Rey’s Helen O’Loy, ca. 1938…
I don’t think the human brain is the optimal consciousness engine. I just think replicating the human brain’s strengths — while a colossally difficult task — will be much easier than improving upon them.
Neither Borders nor B&N has it. :(
Scorpius@3:
I’d think we’ll want our AIs to have emotions, and specifically to recognize us as their progenitors, or at least the bumbling parents they love and want to take care of.
Otherwise, you run the risk of pure logic dictating that they’re the more efficient/valuable/whatever species, and wiping us out. Kind of like the Terminator future.
Claire@11, Frank@22, I would say that a story in the same vein is Rachel Swirsky’s Eros, Philia, Agape, which is one of this year’s Hugo nominees.
http://www.tor.com/stories/2009/03/eros-philia-agape
Xopher@21:
Paraphrasing Winston Churchill here: The human brain is the worst consciousness engine there is, except for all of the others we’ve tried. :)
“It’s been over a decade since we built a computer that could defeat the best human chess players”: I’m not sure this is true. The much talked about “Deep Blue vs. Kasparov” match was biased. Kasparov had one hour to make a move. But the “Deep Blue Team” was composed of 200 IBM programmers who could stop the program to “tweak” the decision path and then restart it. If I remember correctly, the one hour only applied when the program was running, excluding the long meetings where humans were analyzing the game to tweak the program.
I have yet to hear about a real match where a computer ran a chess program on its own, unassisted by any human (or other machine), and won.
TOK: just so.
Quite a thought-provoking article about an important question, Ted; thanks.
In humans, emotions and intelligence have of course evolved alongside each other over the generations. As mentioned above, it’s actually difficult to separate where the ‘emotions’ end and the ‘intellect’ begins – though such a separation does match our intuition. E.g., we think of computers as creatures of pretty much pure intellect today.
Now, it seems to me that the ‘emotions’ that folks are beginning to program into primitive AI projects are really more about the external signals than the internal state. That is, the internal ‘emotional’ state in those systems really exists in order to cause the AI to make emotional communications to humans in the real world. Anger consists of making an angry face that a human would understand, etc. But it’s not really feeling angry, is it?
Another type of AI emotion, though, would be native emotion, if you will, not just for dealing with humans. Could you make an AI that *felt* angry inside, like a human feels? After all, if an AI can be said to “know” that cats are mammals, then surely an AI could be said to “feel” that it fancies cats.
Finally: If we can build AIs that can *feel* things, can we control those feelings artificially? Prevent anger or fear in certain circumstances, cause love in others? Or will the feelings be so natural to the AIs, that they become something that’s really beyond the creator’s control?
I’m looking forward to reading that story; AI is a subject I’ve been studying off and on for the last 40 years, and I’m very interested to hear what Ted Chiang has to say about it.
There’s research going on to develop machines that can detect human emotions, and can generate “body language” that humans can interpret as affect. But simulating affect is not the reason emotion needs to be a vital part of AI research. The key to that need is the word “desire” that Ted used in his post.
I think most investigators in the field will agree now that an AI needs to learn how to interact with the world*, pretty much from the equivalent of human infancy. In order to do that, there has to be an inbuilt drive to learn, both by imitation of perceived behavior, and by experimentation, with reinforcement of successful trials.
Emotions are part of the learning mechanism: pleasure is positive reinforcement, fear is one form of negative reinforcement, as is anger. Without the basic emotional spectrum, learning can’t be the fundamental part of the formation of intelligence that it must be to create a being capable of surviving in a complex world. Additional emotional complexity will be added on as part of the learning process, in the same way that semantic complexity is added by the learning of concepts, and skills are added by the learning of sensory-motor coordination.
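To make that concrete, here is a toy Python sketch (purely illustrative, not from the post or from any particular AI system) of reward-driven trial-and-error learning, where “pleasure” and “fear” are nothing more than positive and negative reinforcement signals; the action names and the feel function are hypothetical stand-ins:

```python
import random

def learn_by_trial_and_error(actions, feel, trials=1000, learning_rate=0.1):
    """Tabular reward learning: on each trial, try an action and nudge its
    estimated value toward the emotional feedback ('reward') it produced."""
    value = {a: 0.0 for a in actions}
    for _ in range(trials):
        action = random.choice(actions)        # experimentation
        reward = feel(action)                  # e.g. +1 for pleasure, -1 for fear/pain
        value[action] += learning_rate * (reward - value[action])
    return max(value, key=value.get)           # the behaviour that got reinforced

# Hypothetical usage: an agent that learns petting the cat feels good
# and touching the stove does not.
preferred = learn_by_trial_and_error(
    actions=["pet_cat", "touch_stove"],
    feel=lambda a: 1.0 if a == "pet_cat" else -1.0,
)
print(preferred)  # -> pet_cat
```

Strip out the reward signal and the loop has nothing to steer it; that is the sense in which some internal analogue of “this felt good / this felt bad” is part of the learning mechanism itself.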
It follows that intelligence requires a level of subjectivity; intelligence implies intent** and purpose. Without purpose there is no reason for learning and mental growth to occur, no reason for positive reinforcement to be sought and negative reinforcement to be avoided. The p-zombies that Xopher mentioned are supposed to be precisely beings with intelligence but no intent; the dirty little secret of the philosophy of consciousness is that p-zombies are a completely incoherent concept: no such thing can exist as intelligence without intent. So emotion is a necessary part of the mechanism of intelligence, and a byproduct of its development as well.
* The CYC project has spent more than 25 years trying to specify a rule base for common sense reasoning; I don’t think there’s any reason to believe they’ve been anything close to successful.
** I think the lack of recognition of the importance of intent has been the major failing of AI research.
A bit late to the party, but I find myself thinking about “optimal consciousness engine” and what Peter Watts talked about in “Blindsight” as to whether or why consciousness is worth having. I mean, this may be a fairly stupid question, but it seems as if the general public wants AI so they can talk to a computer without programming it, have it infer things, save the effort of asking for what they want. Meanwhile the applications for which AI–ok, well, machine learning, which seems like a related topic–is currently being used include things like trying to predict protein function from sequence and making it easier for retailers to do data mining. Why is AI useful, when speedy pattern-matching is the task most people seem to use computers for? Why is consciousness important, if the machine understands you anyway? What does the general public mean when they talk about AI, and how does that clash with what computer science researchers mean when they talk about AI? Some of that is what Ted Chiang is addressing in this collection, I suspect.
Hmm. I should suggest this as a panel topic at a convention. *facepalm*
I, for one, absolutely adore our cute widdle Overlords!