The Big Idea: Robert St. Amant

I’ve written popular science articles and books, and one of my personal philosophies is that about 80% of any subject can be understood by any ordinary person — if you can manage to explain it correctly. Robert St. Amant has written a book to explain computer science to everyday folks — appropriately entitled Computing for Ordinary Mortals — and in the writing, he found himself confronting the task of making approachable what is often considered an unapproachable field. How did he do it? I will let him tell you this story.

ROBERT ST. AMANT:

When I was ten years old or so, I saw a battered paperback copy of Triplanetary on my grandfather’s bookshelf. I borrowed it… and found myself in ten-year-old heaven. Science fiction led me to popular science, with Isaac Asimov (and Edgar Cayce, embarrassingly enough) to help me cross the boundary. I read about physics, space, biology, math, and psychology. It was formative reading. Today I’m a computer scientist, and I’ve just written my own book.

The big idea in Computing for Ordinary Mortals is that the basics of computer science can be conveyed through stories. Not stories about computers and how we use them, but stories about other kinds of everyday things we do. Computing is more about abstract concepts than about hardware or software, and we can understand these concepts through analogies to what happens in the real world.

For example, imagine you’re shooting a low-budget horror movie, set in a haunted mansion. Unfortunately, you don’t have a mansion, much less a ghost, but you’ve found a couple of big, empty rooms that you can redecorate from one scene to the next, so that in the finished movie they’ll look like different places. You’re taking advantage of the locality principle. Movie-making is a complex activity that needs a lot of space, in theory, but it can be broken down into smaller activities that fit into much smaller spaces and work at different times; each part only needs what’s in its own neighborhood. So you can reuse the space you have, over time. We see the same thing happening when people play half-court basketball or timeshare a vacation apartment.
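(If you’d like to see the reuse-of-a-small-space idea in code rather than in a story, here is a rough sketch; it isn’t code from the book, and the class and function names are invented for illustration. This is roughly what a hardware or software cache does: keep a handful of recently used things in a small, fast space, and evict the least recently used one when the space fills up.)

```python
from collections import OrderedDict

class SmallFastSpace:
    """A toy cache: a small space reused over time (illustrative sketch only)."""

    def __init__(self, capacity=4):
        self.capacity = capacity      # only a few "rooms" to redecorate
        self.items = OrderedDict()    # what currently occupies the small space

    def get(self, key, slow_lookup):
        if key in self.items:                 # already here: the fast case
            self.items.move_to_end(key)       # note that it was used recently
            return self.items[key]
        value = slow_lookup(key)              # not here: take the slow trip elsewhere
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)    # evict the least recently used item
        return value

# Repeated requests for the same few "scenes" mostly stay inside the small space.
cache = SmallFastSpace(capacity=2)
print(cache.get("parlor", slow_lookup=lambda k: "decorated " + k))  # slow trip
print(cache.get("parlor", slow_lookup=lambda k: "decorated " + k))  # reused, no slow trip
```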

Analogies like these can be spun out into short-short stories, with characters and a minimalist plot, to make the how and why of computing a little more memorable. Why do computers have caches? How does virtual memory work? Can a gaming environment be infinitely large? “Well, you can think of it as if you’re making a movie…” I’ll skip the detailed explanations to get to the most interesting part–if a story works, it means that we can understand computing through some ordinary experience and the reverse. Real life as computation.

That’s exciting, to me. How hard could it be to write an exciting book full of computer-relevant stories? Hmm. Harder than I’d expected. The explanation part was straightforward, but the stories themselves didn’t come as easily. Eventually, though, I realized that I was writing something close to modern parables or fables, following strict conventions about how a story should unfold (with a bit of science, math, or engineering at the end instead of a moral insight). Most computing concepts are about making sense of problems and how to solve them; I just had to figure out how these problems might arise in an interesting way in some real or imaginary world.

For example, the opening story in a late draft was an Alice in Wonderland pastiche. I liked it, but one reviewer was irritated with the pacing, and another just said, “Alice has to go.” So Wonderland changed into a balloon ride over a coastal town, then became a scientific expedition to Mars, and ended up being a conversation with an alien on a spaceship. I was rewriting the “same” story, in a sense, but that was worthwhile; some stories express a given theme (or analogy, in my case) better than others.

Telling stories in popular science carries some risk. Are the stories true? No–analogies and metaphors are never literally true. Charles Petzold even argues against such story trappings in his excellent book, Code: “Metaphors and similes are wonderful literary devices but they do nothing but obscure the beauty of technology.” My analogies do approach metaphor at times. But I think a better question is whether the stories work, whether they give us appropriate insight. After all, we understand the world around us through stories. If those stories happen to encompass the computers and computations in our modern lives, then so much the better.

----

Computing for Ordinary Mortals: Amazon|Barnes & Noble|Indiebound|Powell’s

Visit the book blog. Follow the author on Twitter.

26 Comments on “The Big Idea: Robert St. Amant”

  1. This is a wonderful idea!

    As an IT professional who often has to explain extremely complex, globe-spanning issues to people with different or no technical expertise, I resort to storifying computer issues all the time. The users don’t tune out of a story the way they do if you give them numbers and statistics and usage restrictions. They also remember the specifics of the story better than another stack of numbers. “Remember how you told us that Srinath in Commercial Card did the same thing we did to fix this? We want to do what Bob in Retail did, instead.”

    What’s interesting is that my customers (internal users) love being used in my stories. I have a few that I hold up as the gold standard of good sense, and a few who are, well, cautionary tales. It has encouraged them to talk to each other and build better solutions together. One guy has gone from cautionary tale to gold standard, and another has gone from gold standard to… well, not. Being able to talk in an engaging way about how these things happened has been critical to helping other customers avoid the same pitfalls and embrace the same values.

    tl;dr – I love this idea and I ordered this book. Can’t wait to read it!

  2. @Eridani

    I resort to storifying computer issues all the time.

    Hey, cool. I do this in the classroom as well. Stories let us get at the “why” of computing in a natural way–we face such-and-such a problem, and we have these options for solving it, so here’s what we can do…

  3. This is a very cool concept. And fortunately for me, I’m getting into the intricacies of Web design and programming right now, so this is a timely find. Now if you’ll excuse me, I’ve got an order on Amazon to place…

  4. Oh man, I need to read this book pronto! I teach college freshmen biology, and this kind of translation and explanation of systems is my entire life.

    Glad to see a kindred spirit!

  5. This could be a good Solstice present for my niece.
    Are stories appropriate for a precocious science-geek 13-year-old?

  6. I’m happy to bump into kindred spirits here.

    @Gulliver: As much as I could, I pitched the writing toward adults who read popular science, but I think it would work for a precocious teenager. I’ll be giving a talk at a high school this week, so I expect to find out soon in person.

    You might also look at a couple of recent books for young readers. One is Carlos Bueno’s Lauren Ipsum, which I think of as a blend between The Phantom Tollbooth and Alice in Wonderland; it’s fun. Another is Jeremy Kubica’s Computational Fairy Tales, which has the tagline, “Have you ever thought that computer science should include more dragons and wizards?” Well, who hasn’t? I should say that I’m a fan of Stanislaw Lem’s Cyberiad stories; that may tell you something about why I like these books.

  7. Speaking as a Technical Writer, this sounds like divine reading! I always try to tell stories in my work, especially when writing procedural text.

  8. @ Rob St. Amant: Thanks! I’ll take a look at those. I plan on introducing her to Stanislaw Lem when she’s a few years older, even though she’d probably already think it was all very tame. Kid-geeks these days :)

  9. This sounds like my perfect read, combining learning and stories. Thank you, Robert St. Amant! (Also Mr. Scalzi for having it on The Big Idea)
    It’s an area I’m woefully lacking in knowledge and I would so like to know more. I’d buy it right now but it’s not out in the U.K. until December. It’s on my Amazon wish list though.

  10. I have ordered it from Amazon. In addition to this book I highly recommend “Physics for Future Presidents” and “Energy for Future Presidents” by Richard Muller as being other books which are written for the tyro and will give the reader enough of a base of info to sound sensible in discussions with friends and make decent decisions in elections – and you will know more than almost any congressman.

  11. Hmm. Curious to see how well this works.

    My day job is IT consulting, and I do training for my consulting company. Stories are an interesting approach to education.

  12. @George William Herbert:

    I’m curious too–hopeful, though. We have good examples of thought experiments in other areas of science, for example: Einstein’s elevator, Schroedinger’s cat, Maxwell’s demon, Searle’s Chinese Room, Putnam’s time-traveling Oscar… That is, all they need is a bit more context, and they become analogical stories.

    All analogies break down at some point. So a storyteller has to be clear about where the points of similarity end. For example, I mention a bad analogy from the NIH: “Your mind works a lot like a computer. Your brain puts information it judges to be important into “files.” When you remember something, you pull up a file.” Aargh. This is just terrible, for reasons you can imagine.

    But I’m a big fan of Daniel Dennett’s concept of an intuition pump, something that helps us recognize the important features of a situation. Stories can give us a hook for our intuitions, when we’re trying to understand something new or unfamiliar.

  13. Your mind works a lot like a computer. Your brain puts information it judges to be important into “files.” When you remember something, you pull up a file.” Aargh. This is just terrible, for reasons you can imagine.

    I suspect that analogy was born of the probably correct hypothesis that the human brain is a computer in the academic sense, combined with not knowing that there are computers that bear almost no relationship to a binary-based file server.

    http://eetimes.com/electronics-news/4036696/Analog-computer-trumps-Turing-model

  14. @Gulliver

    I think you’re right about where the comparison came from and how they got it wrong. When we’re drawing analogies between the mind and computation, we need to pick good models and abstractions. By analogy, everything comes down to physics in the end, but we still find value in the concepts of chemistry and biology and so forth… There’s still a good deal of argument, mainly among philosophers of artificial intelligence, about whether we’re essentially Turing machines. I’m agnostic, like most of my colleagues, I think.

    (Coincidentally, Siegelmann, whose work is described in your linked article, is now a professor where I did my Ph.D., but she got there a few years after I left, so we’ve never met.)

  15. @ Rob St. Amant

    There’s still a good deal of argument, mainly among philosophers of artificial intelligence, about whether we’re essentially Turing machines. I’m agnostic, like most of my colleagues, I think.

    I spent a decade building and selling enterprise implementations of multi-dimensional data-analysis tools. We may or may not be UTMs, but I’m fairly certain we’re not computers in the layman’s sense. I’ve since returned to academia, and my own thesis research is into nonlinear quantum algorithms. I think Penrose’s hypothesis that we may be quantum Turing machines is pretty much disproven by this point, as any QM mechanism would have to operate below the threshold where it could directly influence neuronal weightings. There may be quantum chaotic complexity in neural pattern formation, and I’d be somewhat surprised if there isn’t at least a classical chaotic component, but I doubt we’ll have satisfactory research results for a couple decades at least. Like you, I prefer to remain agnostic, though I’m happy to speculate.

    (Coincidentally, Siegelmann, whose work is described in your linked article, is now a professor where I did my Ph.D., but she got there a few years after I left, so we’ve never met.)

    UMass? I’ve been following her work for a while. I think she’d be a seriously cool person to meet.

  16. @Gulliver

    It’s funny when I’m in a conversation with someone and I suddenly realize that they know a lot more about what we’re talking about than I do. I’ve just reached that point. :-)

    For what it’s worth, my less-informed take on Penrose is the same as yours. What I wrote about modeling and such is based on general background reading and some specialized experience with computational cognitive architectures like Soar and ACT-R. They’re very abstract, but I think they’ve given us some good insights into the nature of cognition, even at the symbolic level.

    Yes, I was at UMass from 1991 to 1996. I started out as an AI person but soon gravitated toward cognitive modeling and HCI. I should go back some time; the department has changed a lot.

  17. Excited about this book – will try to pick it up soon.

    As a former technical writer, now e-learning instructional designer, I can tell you that “training via storytelling” is all the rage now, and the reason is that it works.

    I’m a long-time geek, but I have quite a few friends who are digitally challenged, and I wonder what Mr. St. Amant would think of the analogy I’ve come up with:

    A computer (laptop or desktop) is kind of like a stove. Just because you buy a stove doesn’t mean your dinner is ready. You also need pots and pans (i.e., software) and groceries (i.e., raw data and information). And maybe the operating system could be the energy – gas or electricity? All these elements go into cooking a meal.

    I know it can’t be stretched too far, but I think it might help someone get a feel for why you can’t “just plug it in and do something.”

    What do you all think?

  18. @ Rob St. Amant

    My own undergrad was in applied physics (really more of an engineering degree than a proper physics degree) and I’m fast-tracking a Masters/PhD in Quantum Physics/Quantum Computation. Just submitted my graduate thesis for initial review, so it’ll be a few years until I’m a postdoc. And I hold no computer science degrees. I sort of fell into the IT industry after college for financial reasons, but my knowledge is limited to the applications we developed. And I was CTO and CFO, but our lead developers (both PhDs) knew way more about the technology than I did. I’m sure you know considerably more than I about AI and human-computer interaction…and many of my end-user customers would surely agree on the latter :)

    I really like academia. Being good at business does not necessarily equate to enjoying it, particularly in the world of the Beltway Bandits.

    One thing I’ve learned is that mortal minds such as my own can only be an expert in one or two very narrow areas. My interest in cognitive and AI research is decidedly at the amateur level. My real passion is in modelling nonlinear physical systems; QC just happens to be the field that’s going to revolutionize experimental physics over the next century. I originally wanted to study quantum gravity, but that’s gotten too abstract for my tastes and I rather suspect will remain so until we can either string out deep-space particle accelerators megaklicks long or figure out a way to build a really accurate universal quantum simulator – hence my interest in QC. And while governments aren’t exactly lining up to build deep-space accelerators, everyone wants their quantum cryptography yesterday.

  19. @ old aggie

    I know it can’t be stretched too far, but I think it might help someone get a feel for why you can’t “just plug it in and do something.”

    Interestingly, seamless ubiquitous computing is widely considered the holy grail among user interface designers…and almost as elusive (though the late Steve Jobs took a fine whack at it and arguably advanced the state of the art on a practical industrial design level more than anyone else since Douglas Engelbart).

    [sorry for the double-post, John]

  20. Another book in this space is “D is for Digital: What a well-informed person should know about computers and communications”. This is the textbook Brian Kernighan created for a course on “computers for poets” that he taught at Princeton. I purchased it for my technically illiterate high-school son and he has found it informative and enjoyable. The first book that I read by Kernighan was the C programming language specification he created with Ritchie; it’s fun to see how well he can communicate with a different audience.

  21. @ old aggie

    Coincidentally, I’ve sometimes used something like your analogy in my human-computer interaction class. I ask students, “Imagine that the first graphical user interfaces had been targeted at people who work in kitchens in a restaurant rather than businesspeople–what kinds of alternative metaphors do you think might have developed?” And then we talk about organizing data in shelves and drawers rather than in folders; we think about more specialized, less monolithic applications for dealing with data; we imagine more built-in structure in information processing, from preparation to procedures carried out in parallel to final presentation.

    All of this is at the interface, but that’s how most novices begin to understand computing today. So I think that your analogy is moving in the right direction with its account of raw ingredients that need preparation by specialized tools. I like to include “process” in my high-level descriptions of what a computer does, so if I were telling the story it would probably involve an automated kitchen carrying out recipes.

    I think it’s great that you’re introducing your friends to a better understanding of computers, so that they can see them as more than just black boxes.

    @ Gulliver

    Good luck with your dissertation work! It sounds exciting.

    @ David Karger

    I like Kernighan’s new book. He and I cover a lot of the same ground, though we break down the field in different ways. Kernighan covers more of the practice of computing than I do, and he gives a more detailed snapshot of the state of technology today. For example, he goes in the direction of Abelson, Ledeen, and Lewis’s Blown to Bits, by discussing how widespread digital information has changed society and our lives; for another example, he explains how a file system can be stored in blocks on a disk. I don’t go into data very much in Mortals, concentrating more on what computers do at an abstract level. For example, I explain sequences, trees, and graphs, and how they can be searched or traversed, then I spend just a couple of paragraphs describing a simple file system as a tree; elsewhere I use the same concepts to explain uniform cost search and A*.
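    (To give a flavor of the file-system-as-a-tree example, here is a rough sketch in code, not something that appears in the book; the dictionary layout and the find function are invented for illustration. Each directory is a node whose children are its entries, and locating a file is just a depth-first traversal.)

    ```python
    # A toy file system as a tree: each directory maps names either to a
    # nested directory (another dict) or to a file's contents (a string).
    file_system = {
        "home": {
            "rob": {"draft.txt": "Chapter 1...", "notes.txt": "locality, trees"},
        },
        "tmp": {"scratch.txt": ""},
    }

    def find(tree, name, path=""):
        """Depth-first traversal: return the path of the first entry called `name`."""
        for entry, child in tree.items():
            here = path + "/" + entry
            if entry == name:
                return here
            if isinstance(child, dict):        # a directory: descend into the subtree
                found = find(child, name, here)
                if found:
                    return found
        return None

    print(find(file_system, "notes.txt"))      # -> /home/rob/notes.txt
    ```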

    (I saw you at the Google Faculty Summit this past summer, by the way, but we didn’t have a chance to talk.)

  22. As an aside, my company is teaching a class on the LAMP stack to a bunch of young adults through a regional outreach program right now (started the second class, the first one earlier this year went well). I was not involved last time but am helping with office hours or some equivalent this time. We teach a lot by analogy – here, and elsewhere – but will keep this all in mind for the course.

  23. Cool. I don’t know anything about LAMP, but it sounds like a nice platform for exploring some important computing abstractions: modularity and abstraction hierarchies, as well as programming, database, and networking concepts. Fun stuff.

  24. LAMP is more the default software stack that people are expected to work with now as engineers or sysadmins, but since it covers systems, web server, programming, and databases (as well as networking), you touch on all the components. We’re coming at it more from the practical than the theoretical / CS side of things, but having that angle on tap to help students dig more is useful.
