Your Reminder that “AI” Will Just Flat-Out Make Things Up
I asked Bard, Google’s AI, the name of my upcoming novel, and also (as a test, because I haven’t publicly announced the title), the title of the novel after that. Bard correctly noted the name of the next novel (Starter Villain) and correctly noted I hadn’t publicly named the novel after that. But then, unprompted, it confabulated the following:
“Here are some details about Starter Villain:
- The book is set in a world where villains are a legitimate career path.
- The main character, Jason, is a young man who dreams of becoming a villain.
- Jason enrolls in a villainy academy, where he learns the skills he needs to be a successful villain.
- Jason’s classmates include a group of misfits and outcasts, who he quickly befriends.
- Jason and his friends must face off against a powerful villain, who is determined to destroy them.”
None of the above is even remotely true. I mean, it sounds like a fun YA book! But that’s not the book I wrote.
Why does Bard think this is what my book’s about? Got me. It’s not like the details of Starter Villain aren’t out there at this point; reviews are starting to come in and we have a “back cover” book synopsis on every online retailer out there. When I queried Bing’s AI about it, it got it right in kinda spoilery detail (don’t worry, Bing’s been wrong about a bunch of stuff too, just not this one thing). There’s no reason Bard should have gotten this wrong, or offered it up without additional prompting. It just did.
Now, the thing is that Google and Microsoft and other organizations are really really pushing AI into web search and other information-gathering functions. This is quite evidently a tremendously bad idea at this point because, as you can see above, the information you retrieve cannot be considered in any way reliable. To Google’s credit, it notes this can be the case (its exact wording is “Bard may display inaccurate or offensive information that doesn’t represent Google’s views”), but I wonder how many folks are going to pay attention to the disclaimer.
Getting the details wrong on my upcoming novel is small potatoes; it harms very few — possibly some sad bastard student trying to get an assignment done, or someone thinking of purchasing the book who might later be mildly surprised that the synopsis they were given does not match the book they paid for. But, of course, if Bard is getting this wrong, what else, and what more important than this, is it getting wrong as well? “AI” will become more refined as we go along, but “AI” is not, in fact, intelligent, artificially or otherwise; writer Ted Chiang’s recent suggestion in the Financial Times that a better description of “AI” is “Applied Statistics” is well-observed. It is not at all clear that “AI” in the future will be able to discern the difference between the factual, the incorrect, and the intentionally misleading any better than it does today.
I am fortunate in that I am a minorly notable person with a long track record of publication — the easy way for me to check how “AI” is doing on the truth front is to ask it questions about myself and my work and see how much it gets wrong (the answer: evidently, quite a lot). I know it can’t be trusted on that basis. But not everyone can just put their name in, or the name of their book, and then go “well, that’s just crap” when they read the results.
Which is a problem, especially now. Nearly 30 years ago, respected writer, presidential press secretary and former journalist Pierre Salinger plumped for a hoax involving a plane crash because he found the information via the Internet. He was so used to “published” information being vetted and factual that he didn’t quite grasp that the Internet is full of lies and disinformation. Today, I think there will be a whole generation of people, particularly my age and older, so used to the idea that Google and other search engines pull up “correct” information (an idea promoted by Google and other search engine owners, to be sure) that they won’t even question whether the information they’re being offered up has any relation to the truth.
“AI” will make the Internet even less truthful than it is today. It is already doing so.