Daily Archives: August 10, 2010

The Big Idea: Ted Chiang

We all love our tech — heck, I’m still swooning over that Droid X phone I bought a couple of weeks ago — but do we love our tech the way we love a partner, or a parent, or a child, or even a pet? The answer is probably not, and if we do, there’s probably something a little bit off with us. But as Ted Chiang explains, by way of explaining his latest novella The Lifecycle of Software Objects, if we ever expect to get artificial intelligence that matches our own intelligence, a little love — real love — might be the thing we need to give to our tech.

TED CHIANG:

People routinely attempt to describe the human brain’s capabilities in terms of instructions per second, and then use that as a guidepost for predicting when computers will be as smart as people.  I think this makes about as much sense as judging the brain by the amount of heat it generates.  Imagine if someone were to say, “when we have a computer that runs as hot as a human brain, we will have a computer as smart as a human brain.”  We’d laugh at such a claim, but people make similar claims about processing speed and for some reason they get taken seriously.

It’s been over a decade since we built a computer that could defeat the best human chess players, yet we’re nowhere near building a robot that can walk into your kitchen and cook you some scrambled eggs.  It turns out that, unlike chess, navigating the real world is not a problem that can be solved by simply using faster processors and more memory.  There’s more and more evidence that if we want an AI to have common sense, it will have to develop it in the same ways that children do: by imitating others, by trying different things and seeing what works, and most of all by accruing experience.  This means that creating a useful AI won’t just be a matter of programming, although some amazing advances in software will definitely be required; it will also involve many years of training. And the more useful you want it to be, the longer the training will take.

But surely the training can be accelerated somehow, can’t it?  I don’t believe so, or at least not easily.  This seems related to the misconception that a faster computer is a smarter one, but with humans it’s easier to see that speed is not the same thing as intelligence.  Suppose you had a digital simulation of Paris Hilton’s brain; no matter how fast a computer you run her on, she’s never going to understand differential equations.  By the same token, if you run a child at twice normal speed, all you’d get is a child whose attention span has been cut in half, and how useful is that?

But surely the AI will be able to learn faster because it won’t be hampered by emotions, right?  On the contrary, I think creating software that feels emotions will be a necessary step towards creating software that actually thinks, in much the same way that brains capable of emotion are an evolutionary predecessor to brains capable of thought.  But even if it’s possible to separate thinking from feeling, there may be other reasons to give AIs emotions.  Human beings are social animals, and the success of virtual pets like Tamagotchis demonstrates that we respond to things that appear to need care and affection.  And if an AI takes years to train, a good way to get a human to invest that kind of time is to create an emotional bond between the two.

And that’s what I was really interested in writing about: the kind of emotional relationship that might develop between humans and AIs.  I don’t mean the affection that people feel for their iPhones or their scrupulously maintained classic cars, because those machines have no desires of their own.  It’s only when the other party in the relationship has independent desires that you can really gauge how deep a relationship is.  Some pet owners ignore their pets whenever they become inconvenient; some parents do as little for their children as they can get away with; some lovers break up with each other the first time they have a big argument.  In all of those cases, the people are unwilling to put effort into the relationship.  Having a real relationship, whether with a pet or a child or a lover, requires that you be willing to balance someone else’s wants and needs with your own.

I don’t know if humans will ever have that kind of relationship with AIs, but I feel like this is an area that’s been largely overlooked in science fiction.  I’ve read a lot of stories in which people argue that AIs deserve legal rights, but in focusing on the big philosophical question, there’s a mundane reality that these stories gloss over.  It’s a bit like how movies show separated lovers overcoming tremendous obstacles to be reunited: that’s wonderfully romantic, but it’s not the whole story when it comes to love; over the long term, love also means working through money problems and picking dirty laundry off the floor.  So while achieving legal rights for AIs would clearly be a major milestone, another stage that I think is just as important – and indeed, probably a prerequisite for initiating a legal battle – is for people to put real effort into their individual relationships with AIs.

And even if we don’t care about them having legal rights, there’s still good reason to treat AIs with respect.  Think about the pets of neglectful owners, or the lovers who have never stayed with someone longer than a month; are they the pets or lovers you would choose?  Think about the kind of people that bad parenting produces; are those the people you want to have as your friends or your employees?  No matter what roles we assign AIs, I suspect they will do a better job if, at some point during their development, there were people who cared about them.

—

The Lifecycle of Software Objects: Amazon (more stock on the way) | Subterranean Press

Read a recent interview of Ted Chiang at Boing Boing.

Kagan, the SCOTUS, and You!

Apologies for being a bit slow to react on some political stuff that’s happened lately. Moving is hard, moving is hard. There are many boxes; where the hell is my hammer? (With additional apologies to Li Po for butchering his lovely “The Hard Road” for my own sordid purposes.)

So Elena Kagan has been sworn in as the 112th justice of the Supreme Court. I have mixed feelings about this. Much as I’m glad to see another woman — and a New Yorker! — on the court, I’m concerned that Kagan (despite weeks of Republican freakouts over her “extreme liberalism”) is contributing to the court’s drift to the right. The justice she’s replacing, John Paul Stevens, was a solid liberal; Kagan herself appears to be a centrist. While in a vacuum I’d generally consider a centrist to be a good thing, in the context of this court, I can’t see Kagan as much of a counterweight against the hard rightward drag of Scalia and Thomas. Now, I’m not Scalzi and I won’t even pretend to be a centrist myself… but regardless of my personal politics, I think we all need that counterweight right now, given some of the more blatantly partisan decisions the court’s made lately.

So I’m going to try and focus on the positives… and hope that Kagan, like Stevens, turns out to be the surprise liberal the court really needs right now.

Thoughts on Kagan? Remember, Kate’s just itching for her chance to swing the Mallet, so keep it civil please.