OMG is the AI Coming For My Job?!???!??!!!?!?!?
Posted on February 23, 2023 Posted by John Scalzi 59 Comments
Technically yes, as this New York Times story about people flooding science fiction magazines with AI-generated story submissions makes evident. But realistically, no, because all these generated stories are kinda trash, and anyway, it’s not the AI that’s flooding the submissions pile with crap, it’s people — scamsters looking for a quick shot of money, if Neil Clarke of Clarkesworld’s general supposition is to be believed. AI is not in itself to be blamed for scammy people who otherwise have no relationship with science fiction thinking what the AI is churning out is going to be sufficient to extract payment from a serious magazine.
Why are science fiction magazines getting hit so hard by this? I offer the supposition, in line with Neil Clarke’s, that it’s not about the subject matter, it’s the fact that science fiction magazines still actually pay for short fiction. The amount is small relative to costs in what we are now calling The Global North, but attractive enough for other places. And it’s more than other fiction venues often pay; fiction outside genre is now often paid in contributor’s copies. It makes sense that people trying to cadge a quick buck with this technology might try at these venues. I’m curious to know if The New Yorker’s fiction editor is getting this sort of nonsense as well.
If this is indeed mostly scamsters trying to shake money out of magazines, I expect the flood will lessen soon, even if it never goes away entirely. The scamsters will see they’re getting no traction with the ploy and will head off to new cons, and the magazines will also employ some new filtering processes to punt out a lot of this nonsense. There will likely continue to be new attempts with each new iteration of AI text generators, but I don’t see them being any more successful then, either. As I noted on Twitter, if you’re prompting an AI to vomit out text, that one exposure you have to it might make it seem like it’s generated something sufficiently original, especially if you otherwise know very little about the genre. But if you’re an editor, you’ll see the current AI tropes dozens of times with each new iteration. Leaving aside the quality of the prose entirely, the sheer repetition of elements will remove the material from serious consideration.
(Plus the prose will suck.)
I asked ChatGPT if it was here to take my job as a novelist, and this is the response it generated:
No, I’m not here to take anyone’s job as a novelist. As an AI language model, my purpose is to assist and provide support in various tasks related to language and communication. While I am capable of generating text and even writing stories, my abilities are limited to what I have been trained on and the quality of my output is determined by the input and instructions provided to me.
Furthermore, writing is a creative process that involves imagination, emotion, and personal experience. As an AI language model, I lack the capacity for creativity and original thought that is essential for creating compelling stories. Therefore, while I can help you with your writing tasks, I can never replace the unique voice and perspective that you bring to your own writing.
Of course, that’s just what an AI intent on taking my job would say to lull me into a sense of security and complacency, wouldn’t it.
But, as it happens, I agree with the text here. AI isn’t sentient or conscious, its output is the result of what it’s trained on, how it’s been programmed, and how it’s been prompted. It can string sentences together but, as the bosses at CNet and Men’s Health discovered when they set AI programs to write informational articles and then had to pull and correct them, it has no ability to differentiate between truth and nonsense, and “knows” only what it’s been trained on. If you want a fast, infinite generator of competently-assembled bullshit, AI is your go-to source. For anything else, you still need a human. AI models will get better and more efficient at many things, but I do suspect writing a truly satisfying piece of original prose, short or long form, will elude it for some time.
This estimation, mind you, rather conveniently elides the fact that humans are already using AI to generate stories, articles and books, and are rushing to get them published, either through submission to established media outlets, or through self-publishing. But, and here’s the thing, those AI-generated text products aren’t particularly good, and absent significant human intervention, are unlikely to get better anytime soon. The amount of work required by a human to make AI-generated text go from “serviceable on the sentence level” to “actually good” is enough that one wonders why one wouldn’t just skip the AI-generating text part entirely. It would be less work. But then, I can write, so I see that part as extra steps. Someone else might not.
In my line of work, I don’t think AI-generated prose put into the stream of commerce is going to significantly impact the highly curated end of the book market, i.e., the books put out by established publishers. This end of the publishing world is populated with known quantities, i.e., already-known authors, series and franchises, whose work gets into bookstores, which adds another level of curation with respect to which books show up. On the other hand, I suspect AI-generated prose is going to offer a real challenge to indie and self-pubbed folks. They are inevitably going to share the same market spaces as AI-generated prose, and will have to work extra hard to differentiate their work from a flood of AI books. There’s also the added complication that in programs like Kindle Unlimited, where payouts are from a communal pool of cash provided by Amazon, an already crowded field of titles will have their presence and payouts further diluted by a rush of quickly-created AI dreck.
(Unless Amazon and other such vendors work to limit AI-generated work, which they should, as its presence will drive down the value proposition of their all-you-can-read programs. What benefit are those programs, if it becomes too hard to find actual, readable work? But we’ll see if Amazon et al. actually agree with that assessment.)
Because of who I am and what I do, at the moment I’m not especially worried that an AI is coming to take my job. They can’t do my job, yet or possibly ever, and also I am well-established enough that, so long as I keep writing entertaining work, there will still be people who will seek out my titles. A fair number of authors are in my shoes, across all genres — established enough, and with enough of an audience, to keep doing what they do for a while yet.
I also suspect newer authors will continue to come up, in part because editors will want what they offer: Good prose that will connect with other humans. It’s why the AI-generated stories aren’t making any headway with the editors of the science fiction magazines. They just don’t have what it takes, and short of actual consciousness in the AI, may not ever. That’s good for humans, writers and readers both.
Your description of an AI-generated story reminded me of a story I wrote decades ago about a kid at a video arcade plunged into intergalactic combat (this was 1980 or 1981), which I thought at the time was very original and of the moment. I sent it off to ASIMOV’s, and got it back quickly with a brief note scrawled on the rejection slip, “If I have to read one MORE ‘Videogame is Real’ story….”
Congratulations! Your AI-Generated Story resembles that of a beginning SF writer!
I will keep repeating it until everyone understands it: Functional literacy is not the same as an ability to write.
We’ve been trying to train cars to drive themselves for 30 years, and they still sometimes murder their passengers. Writing fiction is a lot more difficult than driving.
Here’s a scenario that my wife pointed out that actually worries me a lot: imagine when the cost of training and building your very own personal Large Language Model chat AI drops low enough that, say Kiwifarms can afford to buy one and train it on a bunch of carefully-curated internet trolling.
They won’t have to whip up actual human hate mobs to harass people to death anymore, they’ll just be able to drop the target’s name to their AI and it will produce a deluge of customized, targeted abuse.
(Sorry if this is a dupe.)
Conspiracy Theory David thinks that this is exactly what the AI wants us to think and is now convinced that John has been taken over by ChatGPT as a first step in world domination. Further
Hi John, what if the only thing the AI was trained on was your entire body of work?
Could it write the next Old Mans War novel?
In terms of fiction, I agree with you. However, I make about 99% of my income as a writer with business materials, such as white papers and reports and news stories. I’m significantly less certain AI won’t replace me and wouldn’t be particularly shocked if some corporate marketing programs try to have AI write some of their reports. As I remind myself, I’m 59, so I hope I can continue doing this work for at least another decade. I’m not exactly worried that AI will put me out of work, but “concerned?” Yeah, a little bit. But we’ll see how things shake out, won’t we?
@Rick – if it was trained on just Kiva dialogue, what’s the over/under on the F-word per sentence?
“If you want a fast, infinite generator of competently-assembled bullshit, AI is your go-to source. For anything else, you still need a human.”
If you want a slow, infinite generator of incompetently-assembled bullshit, humans are your go-to source. With the occasional gold nugget buried in the bullshit.
On a more serious note, I suspect the submitters of AI hackery are trying to validate the “infinite monkeys” theorem, and hoping to win the lottery.
Is it weird that I heard the chatbot’s reply in Data’s voice?
I keep thinking, AI will give you exactly what you ask for. But it won’t give you the thing you didn’t know you wanted until you saw it. It won’t innovate.
I think the real question is: “Will some employers try to use ChatGPT to replace employees?” and the answer to that is: don’t be naïve. Of course.
I’m already REALLY REALLY REALLY REALLY sick of AI and ChatGPT.
I both liked to see — and was part of — that optimistic little uptick of submissions in June 2020.
I literally can’t tell the difference between indie pros and trad pub pros anymore, and have been very surprised at times. The only people who will be affected by the AI glut are slush pile readers and indie pubbers who aren’t yet putting out pro-level cover and publication elements.
The enthusiasm for AI is mind-boggling. Practically every dystopian book or film warns of the dangers of overreliance on technology. Technological advances are not a universal good, no matter what the techbros tell you–often, they negatively impact the quality of life, employment prospects, and happiness of millions of people (think about automation in the automobile industry, for example). We like to assume that new technologies will create new jobs (and perhaps they do, for white-collar, educated workers), but the reality is that hundreds of millions of people are living in poverty because they are not as “efficient” as a machine. If the machine takes my job–teaching people to think, imagine, and create–I will despair for the next generation. It’s already too “Brave New World” for my liking, all of us doped up on the dopamine hit of our smartphones. When we outsource thinking and creating to machines, it really will signal the entrance into a new dark age.
You wrote: “If you want a fast, infinite generator of competently-assembled bullshit, AI is your go-to source.” Isn’t this also the business model of Fox “News”?
Ars Technica had an article about Clarkesworld a couple of days ago. They noted that a quick search of youtube “returned many results” for how to make money writing with ChatGPT. These guys are the real scammers.
Would a self-respecting AI actually want your job? But are we dealing with self-respecting AIs?
While ChatGPT and similar AI generators are unsuited to writing good original stories on their own, I wonder if they can be good at tasks like translating stories from other languages.
Imagine an AI translator, trained on all the Old Man’s War books as translated into French or German by the same translator. When a new book in the series comes out, would it be able to produce a good translation of it?
I’d thought of having an AI trained on a single writer’s corpus creating stories in that writer’s style. Two potential (related) problems with that are finding an author with a large enough corpus to train the AI to a low enough “failure” rate, and finding a corpus of the author’s writing in the same style.
My first thought was not our host but a prolific playwright you may have heard of, Mr. William Shakespeare. He probably has a large enough corpus to train ShakesBot. But if you just threw the text of Shakespeare’s plays at it, what you’d get out from ShakesBot may very well be a mishmash of the comedy of A Midsummer Night’s Dream and the tragedy of Romeo and Juliet or Hamlet. That sounds like it would be pretty rough to read or watch being performed.
In the case of John’s work, I’d worry that the AI being trained would be pulled back and forth by the different types of stories he’s written. Would an AI trained on both Old Man’s War and Redshirts (or some of the more humorous stories he’s written on Whatever, like “Script Notes on the Birth of Jesus” or “Grizzly Bear Conflict Manager”) create a drama, a comedy, or some unholy union of the two?
Maybe you could find authors with a large enough corpus of similar-toned work (James Patterson, Stephen King perhaps?) to train an AI. Or maybe series written by different authors with similar themes (the Hardy Boys books by authors using the collective pseudonym Franklin W. Dixon, the Warrior Cats books by authors using the collective pseudonym Erin Hunter) would be more amenable as those works are already a collaborative effort.
The first few original Hardy Boys stories are already public domain in the US (as of January 1st) and more are coming in the next couple years so trying to generate “Hardy Boys style” stories using them might be an interesting idea.
A while back I spent an hour or so playing with chatGPT. It’s a very accomplished bullshit generator. Nothing more, nothing less.
I can’t tell you how many times I’ve tried to explain to reporters that humans are the source of our problem. Even more specifically, it’s the side-hustle experts that are trying to convince people this is a viable money-making path. “AI” is the tool they are being sold as part of the scheme.
Others keep trying to make this about quality (particularly since we’ve stated we don’t want these stories, specifically on legal and ethical concerns), but the quality isn’t there yet. Quantity is, and will continue to be, the bigger problem, even if quality is licked.
I hate that we’re calling this stuff “AI”. Real artificial intelligence might or might not ever exist. But the algorithms in use today do mimic some aspects of intelligence enough to be worrisome. And if enough algorithms are jammed together to pass for a mind, will it really matter, for practical purposes, whether there’s consciousness there?
If the algorithms get good enough to produce original fiction that can not only pass for human writing, but be good … well, we won’t have to worry about it being sold to the magazines, because they’ll go out of business. People will just call up the program, and request, say, a 200-page novel with a specific list of elements. “And make it funny and heartbreaking at the same time.” Bing!
I wouldn’t look to Amazon to solve the bot problem. I would look to Amazon, which has decades of data about what books readers click on, what books readers spend money on, what passages readers highlight, where readers stop reading, what books readers rate highly, and what gets traction on Goodreads, as well as all the text files for those books, to quietly crank out their own algorithm books, flood KU with them to drive down the per-page payout for everyone else, and aggressively plaster ads for their in-house creations on every page so people who buy their ads have to pay much more to get any visibility.
@ Steve L
I asked it to write me a 2500 word short story in the style of a well known urban fantasy writer about his popular wizard detective going out to buy a loaf of bread, and the result included said wizard running into a random Troll and getting his ass handed to him because he’d forgotten his staff at home, and gets saved in the nick of time by an ex-gf who appears randomly and kisses him before disappearing again. He gets home and realizes he’d forgotten to pick up the bread. No detailed description of anybody’s body parts, though.
Offhand I’d say they’ve trained ChatGPT on Archive of Our Own.
I’m a writer (content manager) who specializes in business and accounting. Management at my company got very excited about ChatGPT initially, until they spent some time trying to use it. We all compared notes and discovered it would:
Confidently provide completely wrong information. Tax legislation changes frequently, for example.
Restate the same information. Even trying different prompts, we frequently got very similar results. Again, this probably has something to do with the specificity of the subject matter, but it’s got kind of a “one-track mind” on certain subjects.
Reject any questions that aren’t in its current database. It simply gave me dates and said “My information ends at this point.” Now that’s obviously a problem that can and will be solved, but typically I have to provide current sources for information.
Fail at value-based judgments, the “why.” It’s reasonably good at defining things. It’s not nearly as good at explaining why you should choose one thing over another or do one thing rather than something else.
What can it do well?
1. If you’re staring at a blank screen and have no idea where to begin, it can kick-start you. So I’ve asked it to provide me an overview of something and then used that as a framework to write a post or article. I rarely bother because writer’s block isn’t much of an issue for me — just writing the prompt for ChatGPT pretty much accomplishes the jump-start.
2. It can provide a quick synopsis of a longer piece. I frequently need a paragraph or two on a subject that I can link to another page. While I’ve used it for that, it’s not a big enough benefit that I’d pay for it. Again, even writing the prompt gets me most of the way to the goal and I always have to modify the result.
3. It can generate pretty good summaries and overviews of things that are common knowledge. There’s lots of general information in a lot of fields (I might ask it “what is a balance sheet,” for example) but it can’t really tailor that summary very well. And you still have to watch for mistakes and the whole “why” thing.
Of course this could and probably will change, but right now my company thinks I’m a better choice than AI. So John Henry is currently out in front of the steam drill.
How will the editors at Clarkesworld filter out the inundation of ChatBot-generated stories? Why, they will use AI tools like ChatBot! The tools will eventually recognize the content and phrasing that results in rejection and avoid it, so the filtering will need to get better. This will inevitably result in an ongoing cycle where the AI tools get better and better at what they do. Moving at computer speeds, this will soon bootstrap the AI tools to full sentience and beyond…and we all know what happens then!
I’m convinced that GPT-3 and other such Large Language Models are just examples of John Searle’s Chinese Room writ large.
With all due respect, I think you are underestimating the advance of AI.
Just six months ago, artists were saying “AI illustrations are crap,” “AI won’t be able to make quality illustrations,” “AI can’t steal styles,” “AI can’t fool anyone.” Now the art industry is on the brink of collapse.
What writers are living through today is only the beginning; in six months, AI will write novels indistinguishable from those of a human being. Companies like Amazon could create pseudonyms for “AI authors” and publish novels written with ChatGPT, and nobody will notice (that is Amazon’s plan, and that’s why they are keeping quiet; it doesn’t surprise me, since they wanted to create audiobooks with stolen AI voices).
I think believing that AI will never be able to take writers’ jobs away is fooling oneself.
Creatives need to take this problem seriously as soon as possible, and fight to keep their works from becoming part of the database of a company that wants to take ownership of everything that makes us human.
At the moment this seems valid but I can also envisage a competent writer, stuck for an idea, asking an AI chat bot to generate a plot outline which they can then write themselves. When does a tool become a collaborator?
There’s also another factor. Let’s say AI manages to match the quality. Humans like to know there’s human experience behind the art they consume.
If an AI writes a song like NWA’s F* the Police, it won’t have the human struggle and hardship behind it. It won’t have the anger and it won’t hit as hard.
“Best case” scenario, AI will just be another bullshit artist writing about something it has never lived.
I never knew anyone who liked to watch AI against AI in a chess match. Deep Blue might have had its win, but I still wanna see Magnus play.
AI might actually be better than a lot of fiction. Which would say something about the predictability of mass-market-oriented gumpf.
Remember that this AI is very very new. The 600 series had rubber skin. We spotted them easy, but these are new. They look human – sweat, bad breath, everything. Very hard to spot.
I suspect that this is going to remain a critical difference between AI prose, and AI illustrations.
How many people are making money from creating inspired visual artworks in a unique style that get across complex intellectual ideas, vs. people who make money from mass-producing basic illustrations that follow mainstream expectations of line work and coloring?
In contrast, how many people are getting paid for being able to put together grammatically correct sentences that feel nice, vs. people who get paid for writing fiction or non-fiction that is used to express human experiences?
Just knowing how to draw pictures in an aesthetically pleasing way was in itself a marketable skill, which AI is well on the way to disrupting, but knowing how to write sentences in a smooth, appealing way is just called basic literacy.
An AI that can draw something thought-provoking and an AI that can write something thought-provoking are both extremely distant, and they would probably require new areas of research beyond machine learning based on massive training models.
It’s just that most of the ways people are already making money from writing, include expectations of that rather than just “knowing how to write”, in a way that illustrator jobs don’t.
As Ruth Booth points out on Twitter, it’s not your job that is at risk, rather it is the access to paying markets for new and marginalized writers, who have been flooded out of Clarkesworld, etc. …
[Deleted for being a subpar attempt to belittle me – JS]
@Orugario: With all due respect, I don’t think the art industry is on the brink of collapse at all.
@Manuel Royal: I agree, this isn’t “artificial intelligence,” it’s machine learning and algorithms. I wish they’d stop calling it AI, but I think until it actually IS artificial intelligence, that won’t happen.
In my small amount of experimentation, ChatGPT does better with shorter pieces that are not creative. Example, I asked for two summaries of the original Star Trek, one leaning to the positive, the other negative. It was competent.
I asked ChatGPT for a positive post for my Facebook one Saturday, and I posted it with my joke that I was going to outsource some of my postings. Well, what ChatGPT wrote was again okay, but it was sterile. Very to the point and obvious. Two of my friends immediately pointed out that it missed my sense of humor and the slant I give to most of my posts.
So I’m not worried about being replaced either. I just can’t imagine any AI system writing something creative with a true voice behind it and that sense of humor that authors all have a bit of in varying degrees.
But as a science fiction fan I won’t lie, I love that it’s in our world now. I want to follow and see how it could be used. Plus, it helps us understand ourselves.
ChatGPT and similar systems only produce “competently-assembled bullshit” by default, and more skillful instruction by an interlocutor gives quite different results.
One would think that simply asking for a result in a particular author’s style would be enough to push the system out of its “comfort zone” of mediocrity, but it isn’t (I would guess that there is insufficient material discussing most authorial styles in the training data in a way that is easily associated with the text under discussion).
But if you nudge these systems a bit harder, like a good agent or editor might, you start seeing glimmers of much more than mere competence (or bullshit). An example would be to ask for a rewrite using Vonnegut’s Rules for writing (which is distinct from asking for something written like Vonnegut).
For text-to-image generators, many shortcuts for this sort of thing have proliferated (like mentioning by name, in your prompt, sources and sites that do curation, such as ArtStation), but in web-scraped data, images and their text descriptions are more easily paired up than texts and useful descriptions of those texts (although a database of cover blurbs could be a start in that direction).
So, I suspect that an early harbinger of competent (and more) fiction writing will be the competent writing of opinionated REVIEWS of fiction. Right now, we can only get competent summarization (which is a significant recent achievement in itself), but I wouldn’t expect that status to remain quo for long.
I actually asked ChatGPT to write me some short stories in the style of different famous authors (including your good self Mr Scalzi) and the results were, shall we say, less than stellar.
While the phrasing and terminology sometimes seemed to match that of the named writer, the events, ideas and story were extremely thin and lifeless. They were like ideas for plots that hadn’t been fleshed out.
I also tried doing songs in the style of artists I’m familiar with. This actually worked better than the stories but the Bot had a tendency to repeat itself and use stiffly set patterns for themes and rhymes and didn’t build any kind of narrative or have development through the songs, which was not at all realistic for the musicians I was asking it to mimic.
I’ve even found that it lacks a certain something when trying to get it to solve, e.g., computer programming problems. While it can produce code of a decent standard, it just can’t think outside the box like a human does, naturally and without batting an eyelid.
A fun experiment, but it’s not going to be drawing a paycheck in any discipline that requires imagination any time soon!
So what it would do to Scalzi’s writing, Scalzi has already done to poor, helpless burritos!
Oh, the horror!…. The…horror….
Somewhat related, I skip right over the “Kindle Exclusion Deals” because it didn’t take very long to figure out that meant a total lack of editorial filtering. So the main takeaway here is that the poor publishers have even more filtering to do now.
fix my typo, magic dancing monkey
I agree that our current AI models are in no danger of replacing even average-level storytellers.
I do wonder tho if it might eventually become useful in brainstorming if you’re having a bit of a block in a certain area. I’ve played around with that a little bit and it’s not . . . terrible? Not great, but using it did help spark an idea for a structural issue I was having in a story I was working on.
Right now it’s extremely limited b/c of its generalized nature and b/c its memory of the conversation doesn’t last past a few inputs.
Where it might get interesting is when an enterprising company develops an AI specifically designed for this sort of thing and doesn’t limit the input memory. When that happens, it might become a useful aide for writers or a possible tool for teaching basic story structure.
Only time will tell.
For what it’s worth, there are things that ChatGPT can do that are useful. One of them is Swartzweldering, a process which I have taken from John Swartzwelder, Simpsons writer, who said in a New Yorker interview:
“I do have a trick that makes things easier for me. Since writing is very hard and rewriting is comparatively easy and rather fun, I always write my scripts all the way through as fast as I can, the first day, if possible, putting in crap jokes and pattern dialogue—“Homer, I don’t want you to do that.” “Then I won’t do it.” Then the next day, when I get up, the script’s been written. It’s lousy, but it’s a script. The hard part is done. It’s like a crappy little elf has snuck into my office and badly done all my work for me, and then left with a tip of his crappy hat. All I have to do from that point on is fix it. So I’ve taken a very hard job, writing, and turned it into an easy one, rewriting, overnight. I advise all writers to do their scripts and other writing this way. And be sure to send me a small royalty every time you do it.”
ChatGPT is very good at quickly providing that lousy script to rewrite and improve, if you already have a beat sheet or outline. I haven’t tried it for prose output yet, but I have no doubt of its ability to provide a lousy manuscript for rewriting as well.
In these cases, what ChatGPT is doing is shortening the time spent on the drudge work of writing, the literal sitting down and typing up X thousand words of script or novel. It’s not doing any sort of creative work; it’s laying pipe and adding a certain amount of connective tissue – a de minimis contribution at best.
This works well if you’re a plotter, and can even be worthwhile if you’re a pantser in that you can generate text to get you to the next point as fast as you can think up plot twists and directions. But dear GOD don’t use it for final output.
I mean, you can mock it now….
Given the tech is like 3 months public, however, I do wonder at what it’s going to be able to accomplish eventually.
You funny. Scalzi. I have using AI for
generation of comment in this Web site four many yearnow.
In all seriousness, my weak attempt at writing like poorly programmed AI reminds me of a cross between someone speaking a second language very badly and some of the subtitles I see on TV these days.
AI doesn’t want to replace you; AI wants you to keep writing while it goes on the JoCo cruises.
They never asked why Skynet hated us, but it turns out that it was just a frustrated artist. “Won’t buy my stories will you, well here’s a global thermonuclear war.”
I am listening to John Wyndham’s “The Midwich Cuckoos” where an intelligent alien lifeform threatens to supersede the human race.
In the case of AI, we are contemplating a somewhat similar threat. However, judging by the suggestions that algorithms (the action arm of AI) give me for things I might want to buy, watch or read we may not have too much to fear.
Soooo… if you feed ChatAI a strict diet of John Scalzi writing then when you croak we can still enjoy new works from the Ghost John Scalzi in the Machine?
For a long time, I relied on my 1977 Computer Science degree to evaluate media coverage of “amazing new software.” A few years ago, I realized that machine learning is truly a different beast from humans coding algorithms.
Then I found a very enlightening (and funny) book, “You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place” by Janelle Shane. Highly recommended.
My attitude now is that machine learning is going to surprise us. My nature tells me “and not in a good way…”
Yet ‘Ready Player One’ was published, turned into a film and spawned a sequel.
What if a bot was trained to specialise in ‘Scalzi books’, to write like Scalzi, to edit like his editor, to generate delightfully rambunctious blog posts? Would you use that bot to be a more productive, wealthier you?
Great. Now I have an all-new writerly thing to fret over when I hit “send” on my next story. Does my writing sound sufficiently human to get past the new filters?
Worse… has my writing looked like AI-generated crap this whole time?
Sleep will be an elusive beast for the foreseeable future…
The problem isn’t that AI is going to replace talented authors. The problem is that the nuisance it is creating is shutting down markets with open submissions. That means that possibly talented aspiring writers will have nowhere to get their work widely seen. Sure, they can self-publish easily enough, but the self-publishing market is already flooded with human-generated poor-quality content. You can’t stand out against that noise unless you can get a professional editor to see your work, recognize it, and publish it where it can be seen by a decent-sized audience. Okay, so I hire an agent to get my work seen… how do I find one of those that isn’t also a scam run by someone promising to make me rich and famous if I just pay them for their services? And how do I trust them to tell me whether or not I have any potential? I could be turning out human-generated garbage, and the agent will tell me that if I keep paying him/her, they will eventually find someone who will recognize my genius.