An Update On My Thoughts on AI-Generated Art
Posted on December 10, 2022 by John Scalzi

Tor, which is the publisher of my novels, is being called out for using AI-generated art on a book cover; it appears that they got it from a stock art house. Getting graphic elements from stock art to modify on covers is a common enough practice — including on my own most recent novel cover — but the fact stock art houses are now stocking up on AI-generated art (which they then sell, undercutting creators) is, to put it mildly, not great. It’s possible Tor didn’t know (or didn’t pay attention to) the fact the stock art was AI-generated, but that doesn’t make it better; it kind of makes it worse.
So, two things here:
1. I’ll be emphasizing to Tor (and other publishers) that I expect my covers to have art that is 100% human-derived, even if stock art elements are used;
2. For now I’m done with AI art in public settings. As much fun as it has been to play with, the fact it’s already migrating onto “Big Five” covers is troubling, and I think it’s more important to stand with and support visual artists than it is to show off things I’ve generated through prompts on social media.
I think there is probably a way to responsibly use and generate art with AI, which probably includes ways to make sure “training” is opt-in and compensated for, but we’re not there yet, and I’m okay waiting for some additional clarity before I start playing with it again in public.
Update, 12/12/22: A Twitter thread (which points back here in the first tweet, how’s that for recursive), where I do a little bit of expanding on this:
Good morning, Twitter. Over the weekend I made a decision about my displaying AI-generated art (I'm pausing it), which may be of interest to you, and which you may find on my personal site:
I will note that decision comes not because I hate AI-generated art. In fact I dig it, and I've enjoyed the hell out of putting in weird-ass prompts and seeing what comes out. But there are various ethical issues I and a lot of other people need to sort out about its use…
… especially in commercial settings, where AI-generated art has begun to undercut the artists the AI is trained on. For me, that's not good.
Is there an ethical way to develop and use AI art? I'd like to think so. Until I can thread that needle, I'm taking a break.
Again: AI-generated art is amazing! I've loved playing with it. Also, it's okay to say: this is cool AND has unignorable issues, let's address those. Wouldn't it be great if, for once, the development of a tech aligned itself with creators in its field, rather than ignoring them?
In the meantime: Hey, support some artists, okay? It's a good season to do so, art makes excellent gifts this time of year, and elsewhen as well. Thanks.
This has become a thread, so: here, have a cat.

Originally tweeted by John Scalzi (@scalzi) on December 12, 2022.
— JS
I have a lot of respect for your willingness to reexamine your own positions when situations evolve or new information comes to light. It’s not always easy to do that.
Thanks, although I don’t know that I deserve too much credit. Listening to people directly affected by one’s actions is a bare minimum.
So there are a couple of factors in my decision tree regarding AI Art, which I will henceforth refer to as Generated Art:
First, clearly there’s a facet of Corporate America that just sees this as another way to devalue human labor. “See, we don’t really need you anymore, so you can’t charge as much now.” I had already heard that at least one stock art service was starting to field Generated Art; I hope that they’re at least trying to source it from an ethically trained engine.
Second, on the flip side, the whole Generated Art problem is engineers thinking about whether they can do a thing when they should really be asking whether they should do a thing. Yeah, it seems like such a nice thing, but once it comes in contact with most people, it’s just one more “this is why we can’t have nice things.”
At this point, I really don’t even intend to use it behind closed doors myself, because even using it privately signals that I have no qualms about the ethical violations many of these engines seem to have been built on, to say nothing of enabling unethical behavior in others.
Good for you. Because if we don’t stand up for artists and illustrators, that ground is lost for the other creatives on the chopping block. (And that would be writers.)
Interesting comments, thanks. Can’t believe the Big Five are on AI already.
There are many good things about getting paid, but perhaps a more nuanced follow-up on “AI and…”, let alone AI and facial recognition, or AI and art, or narrated videos, or Grammarly, or Auto-Tune, is in the offing. The final deboarding call from the AI plane was made some time ago and the passenger door is closed and locked (and no cross-check will be made). Buckle up, everyone.
Artists supporting artists – I’m a believer. What is your plan if/when AI achieves self-awareness? I suppose those AIs will need recompense.
It seems these Generated Art engines need to start from scratch and do it correctly: getting opt-in, identifying where the art came from, and compensating artists.
How to get them to start all over is another question.
Thank you for sharing how your thinking on this is evolving.
I’m still reading lots of commentary and comments, and evaluating how I will adapt my viewpoint and actions, and thoughtful posts like this (and thoughtful comments like this community of readers tends towards) are helpful.
I’m already standing with my artist friends on this. It’s one thing to play with someone’s work as practice. But it is never, ever ok to display it. Ever. Stuff that is in public domain? Have at it. Otherwise, hands off.
Thanks for the update and support of creative artists.
Thank you. I agree with your position.
::How to get them to start all over is another question.::
Lawsuits, like the one writers and photographers launched against Google over “orphaned work”, might be a good place to start….
All I can say is “good for you”! And I hope that more authors hear of your stance and insist on only human-created art on their covers as well.
there is no such thing as “ethically sourced coffee”, just coffee grown by less brutally abused workers…
we are now in the realm of ‘feel good’ advocacy rather than legislating crisply defined regulations… look at the nightmare of privacy — John Scalzi’s adjoining post on GPS-chipped watches just one of zillions of bleak examples — nobody with the political influence slash lobbying budget will cooperate in establishing any laws which sharply delineate privacy along with penalties for violations…
stopping “art autonomously generated by trained engine” is no longer possible
if there is a way to cut costs without law breaking then megacorps do it… if they can save more than punishments cost for breaking laws then they dump toxic waste into drinking water, hire children to operate dangerous equipment, ignore copyright, refuse to honor warranties, et al…
just what penalties will it take to prevent ‘generated art’ from becoming the primary sourcing of book covers?
do keep in mind paying a million dollars in fines to continue to poison drinking water is a price megacorps are willing to pay… efforts at consumer boycotts will falter after a month
For a long time, photography was not seen as a form of art. Too easy; everybody can press a button. We all know how that went. Text-to-image is just a technology: it does not replace creativity, it does not give you a brilliant artistic idea, it does nothing if you don’t feed it with your inspiration. Like photography it will disrupt, it will displace, and it will be controversial for sure. But when the dust settles, some will have taken it to a level never seen before, which others will call true art.
I do think it is the right thing to do for the corpus used to train an Art AI bot to use only public domain images, or images for which the developers have acquired the rights.
That said, it is interesting to look at how human artists learn. I am sure that they look at, study, are influenced by, and draw inspiration from many sources of art during their lives. While some situations may require compensation (buying a book or painting, paying to see an exhibition), I am sure there are many, many cases where they study other people’s art without compensation. Maybe that feels different somehow, but I am not sure why…
Tegmark hit the nail on the head for me.
How much did Heinlein “opt in” to training John to write Old Mans War?
The AI generators are just doing what computers and machines have always done. They’re taking a process that used to be entirely manual, and automating it for efficiency. But that’s what the entire scope of history has been for as long as humanity has kept records. We find ways to do things more efficiently.
I work in Educational Publishing and for years now it’s always been, “cheaper, faster, cheaper” when creating publications.
When Photoshop started introducing more tools that could speed up image processing, Publishers started dropping the amount of money they were willing to spend on image manipulation. Something that used to take me an hour now takes 45 minutes, and as more iterations of Photoshop develop there’s more automation available, so 45 minutes becomes “passable in 10 minutes,” and Publishers have increasingly become OK with just accepting “passable” rather than doing the job correctly. Faster. Cheaper.
The same goes for photography. Back in the day Publishers would send photographers out to shoot in museums, etc, to get just the right shot for the publication, and photos that were different from the competitor’s publication, but now it’s all stock. And the cheaper the stock the better. “Can we get an amateur’s shot of a public figure for less than what we’d have to pay Getty? Great! Get it.” Now you see 10 publications with the exact same photo.
These are choices that irk me. Publishers make the choice to make a publication as cheaply as possible, but they still insist on maximizing their profit margins. Lower quality work at a higher price!
And don’t even get me started on what they’re willing to pay us Graphic Artists…
I so admire your willingness to reevaluate your position publicly! (I frequently ask students when they last changed their minds on a belief or position–if we aren’t doing that, we aren’t learning, right?) Thank you for supporting independent artists.
There is a fairly simple way for an artist to leverage AI to make money. The artist takes their art and scans it into the AI, then offers to do customized portraits of anything commissioned. Rather than taking weeks or months to do a work, they can turn out original art in an hour or two. In some ways, this is like running prints off your original work.
Because these will be relatively cheap, the artist will probably get a lot of custom requests for art in their style.
Thank you John.
All interesting and thoughtful comments. Thx!
I’m not sure anything can be done to change this situation. I am a musician and advances in both production and distribution have vastly undermined the ability of artists to make a living. AI is now producing entirely adequate music unaided.
We may simply have come to a point when many art forms are no longer economically viable.
Artist is an occupation, why are they getting all these special protections?
It’s getting old seeing people who lack a fundamental understanding of AI models or how they work take strong negative stances against them. “I’ll make SURE the publisher knows not to put any AI artwork on MY book covers!” as if he is doing something good, meaningful, or right.
It’s just sad at this point.
Bada:
I care what you think, random internet flyby commenter, why, now?
Dias:
My personal preferences about my cover art constitutes special protection how, exactly?
As a soon to be working Art Therapist, I can imagine potential therapeutic uses for AI art generation, but I’m not going to engage with those at the expense of marginalized creators. There’s a lot to think about, and at the moment it feels a bit like Jurassic Park, where our abilities are outstripping our ethics.
While I disagree about the ethical implications of AI art used in covers, or any other commercial endeavor, I think the parallel to stock art is apt. There’s a lot of mediocre or worse stock art and the use of that in a larger project cheapens it and definitely sends a message about the quality of the project as a whole. I think, ultimately, AI art will be regarded in the same way. Sure, it will “improve” but I think its use will fall from favor due to aesthetic preferences rather than ethical concerns or considerations. Business ethics, after all, is an oxymoron.
One can be in business and still have ethics. A famous French writer, Colette, was told not to charge so much, since another famous writer, X, didn’t charge very highly.
Colette retorted, “She is wrong! If that’s how much X charges, then what about the writers who are starving?”
As for the arts, web essayist Paul Graham once wrote a piece showing that “taste” can be deepened. I guess the trouble with AI producing cheaper “passable” music and visual art is that the rest of us end up with cheaper taste.
In my lifetime taste has changed. I don’t expect a modern hurried reader to get damp eyes reading the old poetry that, late at night, I have found so moving.
Just one more reason UBI is inevitable. I see it as a good thing, instead of millions trying to survive in the rat race of scraping by to make money, maybe we can instead focus on ourselves and each other. Maybe if I’m not an overworked wage slave, I have more time to actually raise and interact with my kids, or my aging parents. Maybe I’ll have more time to focus on my immediate environment. I know it’s tough to let go, but I’m convinced that these are great developments. We’re heading towards a brighter day, and yet everyone keeps thinking things are getting worse, which is completely delusional and anti-historical.
I wonder what you people would think if a new technology made farming so efficient and automated that no one would ever have to pay for food ever again, “but what about the poor farmers, what will they do?” nonsense…
It would be perfectly possible to retroactively compensate the original artists from the training datasets. One model is how rap began paying for sampling songs. Artists who can’t be identified or located can have some payment put into escrow, or a general fund for example. It doesn’t solve the consent problem but it is a start.
I’m going to stick out my head to get it chopped off: I’m somewhat AI-positive myself.
(Aside: in case a 2000 word rant breaks John’s comments section, I’ll post it on my blog, too.)
Yes, that includes writing AIs like GPT-3, too, even though I’m a writer.
The reason for this is twofold:
First, I’m leaning on the lessons from the BitTorrent debacle of the late ’90s and early ’00s: a lot of powerful people screaming how music piracy would destroy music, how no musician would be able to afford a living, and how we’d all lose out.
What happened?
Well, according to the research I read back in the day (don’t make me quote, I’ve forgotten the authors), the effects were the opposite: instead of music consumption and music production going down, it went up. Both paid and unpaid.
And the big revolution came when legal streaming came into the picture, and music consumption shot through the roof.
In addition to this, the loss to artists wasn’t noticeable. Turns out that people willing to pirate music weren’t willing to pay for it, so no loss there. Indeed, a certain percentage of them became fans, and started buying things by their favorite musicians.
Win – win, right?
Not quite. What happened is that the big studios lost revenue. Some of the big artists lost revenue, too. Turns out that “free” hurts “established” a lot more than it hurts “up-and-coming.”
Artists had to adapt, true, but it wasn’t anything new. Movie theater pianists had to adapt when the motion picture soundtrack came (and the pianists weren’t happy about that). But sound didn’t hurt movies, or music in movies. It hurt a certain way things were done. This happens. Mostly, people can adapt.
(For a great example of this, check out the Grateful Dead, who toured incessantly and earned heavy money from swag and fans, rather than having some studio dollop out charity payments for royalties.)
Secondly, we’re already in a market where fandom and personal reputation rule. Yes, there are people who’ll buy anything by Harlequin, but there are a lot more people who’ll buy anything by Nora Roberts, or Stephen King, or James Patterson. That’s not something that any AI can replicate, at least not for a while (actors selling their likenesses to movie studios aside – I’m guessing that IP-move will come back to bite them later.)
What does it mean for artists?
One option is the return of patronage. You write, a few powerful fans pay. Not an ideal situation.
Another option is expansion – more art, more books. It would require platforms to provide an option to cycle covers, the way Facebook can cycle ads. That way, you can design 10-20 covers, then A/B (…X/Y/Z) test them to find the one that draws the most – and possibly even yields the best reviews.
We’re already doing this, somewhat, but it’s piecemeal and requires a lot of effort. If it was available cheaply, you’d be able to target a niche that would yield you your needed Kevin Kelly 1,000 True Fans.
But wait, wouldn’t this still destroy the careers of visual artists if written word artists did this?
Possibly. But possibly there would be a new category of AI-assisted artist able to generate a lot of images in their own style, choose the marketable ones, fix them up, then deliver the experience while preserving the rights to create derivative, high-quality works for their own fan bases.
This, BTW, is somewhat what happened in the board gaming industry – 30 years ago, nobody knew who the designer of a game was. Today, you talk to board game fans, they’ll have definitive fave creators. Reiner Knizia, Alexander Pfister, Jamey Stegmaier – there are hundreds of brand-name creators.
Sure, they’re not rock stars – but they’re a lot more rock star than they were in the 80’s before the German wave of Eurogames became a hit in the US.
The same thing could happen with visual artists, where artisanal covers, and art inside the book, could make up the income of a large mid-list (remember, “power” and “established” are hurt by “free”; “mid-list” and “fan-based” fare a lot better.)
Yet another option is what I hinted at above: going artisanal. Just like you can have a mass-produced poster on your wall, you can have a hand-made painting there, too. Posters didn’t make all painters suddenly stop painting, or stop making money.
Indeed, the high-end art market has exploded, and it’s dragged a lot of the mid-range upward with it. Still, it’s based a lot on fame, but there is a thriving cottage industry of part-time painters today, just as there was in the 1800’s. Sure, you won’t have the Paris Art Expo that would catapult an unknown impressionist to stardom, but hey, most unknown impressionists weren’t catapulted to stardom anyhow…
And if you do get an artisanal movement, with artists’ names being a draw on a cover, you’ll see a lot more opportunity for collaboration – smart artists would choose what books to make art for, building a reputation as a seal of quality – if painter Y drew the cover, then they liked the book, then it’s a good book. If movie stars can do an Oprah’s book club, why not artists?
Would this require more effort on the side of artists? Yes. Would it require so much effort that most artists wouldn’t be able to do it? I think not – looking at Faraway Voices (I believe), where you can do a profit share with a narrator, you already see that narrators choose projects they believe in, so why couldn’t artists?
(And yes, I’m aware that some artists already do this, refusing to make covers for books that contain, say, racism, or misogyny, or sexism, or gore, or… Some of them already have a stable of indie writers that they cooperate with.)
So, putting this all together, what does it mean?
It means that things will change. There’s no avoiding that. Nowhere in history that I’m aware of has it worked to remove an advancement, law, or technology that has the potential for easy pleasure, power, or money (in a reasonably free society – Japan during the Shogunate is an exception, but even there new inventions made their way in before the society collapsed with the Perry Intervention.)
You won’t be able to sit still in the boat and float along. Not if your art is at risk, or is already being used as a prompt, like Greg Rutkowski’s. As of yet, the Greg Rutkowski AI-prompt isn’t nearly as good as the real Greg Rutkowski, and while it’s free in terms of money, it’s being paid for with the user’s time, sort of.
Also, I imagine that we’ll see a lot more trademarks being applied to art, which is a can of worms that will most likely benefit the powerful and legally savvy, stifling new artists rather than encouraging them – if you can’t draw a dragon because Elmore owns the trademark on dragon art, what will you draw then?
(It might not be that bad – I’m not enough of an IP-geek to know. I imagine that dragons would be hard to trademark, as they’ve been drawn for ages, but perhaps you’d be able to trademark a style of dragon, or a style of painting. Banksy comes to mind, here. But IDK.)
But back to the AI discussion. Is it a good thing?
Yes and no.
AI, like any powerful new technology (and make no mistake about it, AI is incredibly powerful) has tremendous potential for abuse. Face recognition, deepfakes, stalking, things we haven’t even realized we can do with AI. And that’s just the tech. You can just as easily use the tech as a pretext to abuse a thousand other things, like trademark law, voting law, anything that will repress those who have little power and favor those who have a lot. Power hates a vacuum, expect people to use the excuse of AI to stay in power.
On our particular level, AI is also bad for the people currently earning their pay drawing book covers, posters, any kind of imagery, especially if they’re replaceable (although we already had that discussion with the race-to-the-bottom gig sites, and prime cover artists still exist, although not as comfortably.) An example: movie background images.
Say that you’re recording a movie, or a commercial, or anything at all, even a TikTok or YouTube vid. On your wall behind the action, there’s a poster. Today, the movie maker has to pay a licensing fee to the copyright holder. Even if it’s out of focus, even if it’s there for only a second or two. There’s a whole industry of IP owners (read: trolls) who automatically scan (using AI – oh, the irony) others’ works for IPs that they own, then sue them. Most of these are settled out of court, for low sums (in the thousands or tens of thousands of dollars, depending on how rich the movie maker is), but enough small streams a grand river doth make (that’s a Swedish saying, BTW.)
That’s likely to disappear if the movie maker can quickly generate a few blobs on the wall in a regular printer.
But again, I’m a firm believer that technological change works against the established and powerful, and for the small and nimble. Already, a lot of indie authors make their own covers. Yes, they pay for stock images, yielding heavy money to the micro-stock sites, and a few pennies to the stock photographers.
I imagine that they’ll use some AI art, and some stock art, and combine the two into wonderful imagery – what if you have a fantasy book with individual images for each chapter heading (like Brandon Sanderson did with some of his books?)
Speaking of Brandon – I expect we’ll see a lot more mid-listers doing special editions when they can get custom art cheaply. Some of them will do everything themselves, but some of them will outsource their images. And as people get used to collecting special editions, that market will increase.
As of yet, that means going with quality art, meaning an artist. Will AI take over that? Maybe, maybe not. I imagine that for consistency and truly well-fitting art, the AI won’t be able to compete for years yet. We’re still at the stage where Midjourney (one of the most popular art AIs) consistently draws six to eleven fingers on each hand. If you’re a tech-savvy artist, you’ll use that time to lobby for micro-payments on sources that are used to train AIs. Either that, or figure out a way to pivot.
Seth Godin did a post on this recently: “It means that creating huge amounts of mediocre material is easier than ever before.” His conclusion: “If your work isn’t more useful or insightful or urgent than GPT can create in 12 seconds, don’t interrupt people with it.”
I agree. There’s no end of bad writing, bad reports, bad journalism, bad everything. When you can get away with coasting, some people will. The whole Kindle Unlimited “send-the-user-to-the-back-of-the-book-to-instantly-get-1000-page-reads-and-steal-money-from-real-writers”-debacle is proof of that.
But people get wiser. Companies get wiser. Customers object and if customers object enough, things change.
Will that mean that there will be an “Artisanal”-tag on hand-written and hand-drawn covers and books? Maybe, if enough people request it. If enough readers get burned on a bad AI-generated ending, or a bad AI-generated plot twist, or bad AI-generated art.
Will it mean that the people who will figure out ways to steal from the system, siphoning money away from non AI-work, will get caught and banned? Maybe, if enough money is lost, if enough customers complain.
But to look at this as “our AI-overlords are coming” is to do the whole creativity community a disservice. Right now, there are already tools to combat a cloud of crap: you can’t claim copyright for an AI generated image in the US (the UK is thinking of allowing it, we’ll see how that goes.) You can get sued for trademark infringement if you input the wrong prompt, or the AI spits out the wrong image (try using “Mickey Mouse in power armor” for your cover and see how fast Disney comes knocking…)
There will be people who lose their jobs. Some of those jobs were comfortable. Some of those jobs would have been gone anyhow. Take a look at what happened to typesetters when the first version of Adobe InDesign came out. There was a huge outcry at the paper where I worked. It didn’t work. People still got fired. Some of them found different work. Others had to re-school. I don’t know of anyone who starved (although I do live in Sweden – socialized welfare FTW!)
But there will also be people who will find new careers, careers that don’t exist today, that can’t exist today.
And if music is anything to go by, we’ll see some quick, savvy, flexible mid-listers or long-tailers that will start to make a living where they couldn’t before.
So yeah, I’m somewhat AI-positive. I’ve stuck my head out.
Chop away.
Always appreciate you putting yourself out there and trying to look out for people.
I do worry that if we go down the path of ‘opt-in’ for training we are just going to put the best AIs in the hands of the biggest companies who can pay the most for licences. Not to mention the oversized power someone who uses ‘pirated’ training data might access compared to those playing by the rules.
I don’t know if the answer is to just embrace non-consensual use of published works in AI or if there’s another path but I am definitely scared of the results of attempted regulation.
I have a little experience here too, coming from biotech, where the field spent a decade waiting for and working on tools to get around bans built around the definition of ‘GM’, certainly doing harm in the process. Not that things translate nearly as cleanly as I’m implying…
Whether we regard Mr. Scalzi’s stance as commendable or populist, the idea that we can close the door on AI-generated art strikes me as another version of Don Quixote charging at the windmills.
If publishers only buy from artists, then artists will be the ones using AI to assist them in their creations, just like they use image processing software. And probably that’s the way forward. Surely an artist, assisted or not by an AI, can create better art than an AI alone. And if artists truly can’t add any value to what an AI alone can do… well, maybe it shouldn’t be an economically viable profession, just like other professions have become inviable due to technology.
I mean, we are talking about inspiration and creativity, and how AI’s can’t do those. If that’s true, then that is what artists have to sell. If their book covers look better, there should be a market for them (although, to be frank, many book covers done by human artists right now are not great).
The idea that images used for training AIs need to be opted in and paid for seems impractical and a bad idea… Good luck enforcing that, unless the end results are derivative of the original images in a very obvious manner. It’s also a bad precedent, extending copyright protection in troublesome ways: I wonder what our host will say when the estates of classic SF writers he has acknowledged as influences come asking for their compensation.
I admit that the idea of computers making art and maybe competing with human artists is troubling, but once the possibility exists we can’t just close the door. In the long run, artists won’t be able to make a living unless they can add value, just like any other professional.
I have to say that there’s little that exasperates me more than comments about “tilting at windmills” and “horse already out of the barn,” which are basically the commenter talking the long way around to saying “there’s nothing that can be done so you just have to suck it up.” Which shows either a lack of imagination on their part or, perhaps, an unwillingness to entertain anything other than what they see as the path of least resistance.
It’s not “tilting at windmills” to attempt, at this still very early stage of things (remember, this field literally didn’t exist twelve months ago) to build an ethical framework regarding its development and use. This is in fact a really excellent time to do that.
Also, miss me with the idea that how humans learn and process influences and how AI learn and process input are anything close to the same. That is, bluntly, nonsense. What is accurate is that it’s entirely possible copyright is not a great arena for dealing with that issue.
Finally, it does seem to me that there’s a general tendency to think of this issue in a binary way, or at least, to assume that other people are thinking about it in a binary fashion. That’s also inaccurate. I am a veritable poster child for a technology-assisted creator; I started writing along with the advent of the computer, and the way I create is entirely tuned to that particular sort of technology and what it allows me to do.
The issue is not whether AI-generated art will or will not allow current artists to do their work in different ways, and newer artists to create with AI as their medium. The question is how to do it in a manner that exploits the labor of others in the least ethically questionable way possible, and makes sure that those whose labor has been used to train the AI aren’t just kicked to the curb. If you think that’s not possible, that’s saying something about you, not the tech.
It feels worth brainstorming ways AIs could be trained without harming, and ideally while helping, the artists whose data went into training them. Watermarking, built-in attribution of a work’s ‘influences’, and licences all seem possible, and I’m sure there is much more we can think of.
What I’m undecided about and unsure how to fix is the ‘workaroundism’ most solutions would promote. In biotech for a long time it was common to initially use techniques that would result in unpermitted organisms, but then take the best results and redo them with techniques that would skirt regulation. In software you often see those ‘clean room engineering’ setups which seem pretty contrived but avoid treading on others licences.
Maybe a regulation that any AIs have to be open access would make it possible for artists to fund groups to dig through outputs to build cases that their clients’ art was used and extract a fee? This still doesn’t address those who would just keep their use of AI as a trade secret, but at least they’d technically be breaking the law and, if discovered, could be prosecuted?
@ DanielB:
“The idea that images used for training AIs need to be opted in and paid for seems impractical and a bad idea… Good luck enforcing that, unless the end results are derivative of the original images in a very obvious manner.”
Why would enforcing it be impractical?
You could require AI developers to keep databases of the images used in AI training. These developers would obtain said images by purchasing them from artists. Developers found to be breaking copyright laws, i.e. using illegally obtained images to train their AI art programs, would be fined/sued just like any other lawbreakers.
I have to admit I don’t have the first clue about how AIs are programmed, so it’s possible there’s some really obvious piece I’m overlooking. But it seems to make sense to me.
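For what it’s worth, the record-keeping half of this wouldn’t need anything exotic. Here is a minimal sketch, in Python, of what such a training-image ledger might look like; the schema and field names are hypothetical, just to make the idea concrete:

```python
import hashlib
import json
from pathlib import Path

def record_training_image(manifest: str, image: str,
                          source: str, license_id: str) -> None:
    """Append one provenance entry to a JSON-lines manifest.

    The SHA-256 content hash is what an auditor would later use to
    check whether a specific artist's image was in the declared
    training corpus. All field names here are illustrative.
    """
    entry = {
        "sha256": hashlib.sha256(Path(image).read_bytes()).hexdigest(),
        "file": image,
        "source": source,        # e.g. which stock house it was bought from
        "license": license_id,   # proof-of-purchase / opt-in reference
    }
    with open(manifest, "a") as f:
        f.write(json.dumps(entry) + "\n")
```

An auditor who suspected a particular image had been used could hash it and search the manifest, without the developer ever having to expose the model itself.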
“I wonder what our host will say when the estates of classic SF writers he has acknowledged as influences come asking for their compensation.”
That comparison makes no sense. Human artists may or may not be influenced by the work of others when creating their own art. Writing a program to synthesize trillions of bits and pieces of human artists’ work is not “creating art” – it’s mass-scale plagiarism via computer programming.
I find a lot of AI art interesting, but personally, I’m glad that my abstract canvases are going to be rather difficult to reproduce by machines.
I suspect that in the future, AI art will spur an interest in work that can’t be simulated. Handmade stuff in particular.
AI is the new Napster (2001), and the arguments for and against it are nearly identical.
Against: you’re pirating my music, selling it without my permission, and keeping all the money!
For Napster: don’t care, free music! Cool tech! The future is here! Artists will have to adapt! This is the way!
AIs are copyright derivatives of their training data. Deep learning can be thought of as a weird, non-human-readable compression and indexing algorithm. It would be like taking a bunch of music and MP3-compressing it, or taking computer source code and compiling and linking it into machine-executable code. Both conversions are considered copyright derivatives by the courts.
AIs are no different: they are copyright derivatives of the training data. The design and shape of the neural net itself is like deciding how lossy the MP3 compression will be, or deciding to compile source code for an 8-bit or 32-bit processor. But the gains and offsets that run on the neural net are generated by deriving the training data.
The neural net is like the processor. The gains and offsets you program the neural net with are like executable code.
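To make that analogy concrete, here is a toy sketch in plain Python (no ML library; the numbers are invented): the loop below plays the role of the fixed “processor,” while the gains and offsets are pure data derived from whatever the training set was.

```python
def forward(layers, x):
    """Tiny fully connected net: the loop is the fixed architecture;
    the (gains, offsets) pairs are the learned, data-derived part."""
    for gains, offsets in layers:
        x = [
            max(0.0, sum(g * xi for g, xi in zip(row, x)) + b)  # ReLU node
            for row, b in zip(gains, offsets)
        ]
    return x

# Same architecture, different learned parameters -> different behavior.
layers = [
    ([[0.5, -0.2], [0.1, 0.9]], [0.0, 0.1]),  # layer 1: 2 inputs -> 2 nodes
    ([[1.0, -1.0]], [0.0]),                   # layer 2: 2 inputs -> 1 node
]
print(forward(layers, [1.0, 2.0]))  # [0.0]
```

Swap in a different set of numbers and the same loop computes a completely different function, which is the point of the analogy: the derived part is the parameters, not the scaffolding.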
If someone trains an AI on copyrighted works without a license, that’s blatant piracy. No different from someone downloading an MP3 without permission.
But then AI companies take it a step further: they aggregate all these pirated works into their app, then either give away or sell copies of the derived works, which violates the copying and distribution rights attached to copyright law.
At which point, AI vacuuming up all these copyrighted works without permission, generating a massive aggregate derived work, then charging people money to download copies of those works, or variations thereof, makes today’s visual arts AI websites as bad as the all-pirate-all-the-time Napster website of 2001.
2001 Napster was shut down by massive lawsuits from musicians and record companies. The “Napster” name was later revived when the legit music streaming site Rhapsody rebranded itself under it in 2016.
Visual artists will probably have to find a way to sue these AI websites into the ground. Is AI scarfing up Disney media without permission? The Mouse might not be happy with that. A class action suit by a collection of individual artists, maybe.
And going forward, somebody somewhere is going to have to explain all this to the morons in Congress so that this sort of thing can get some regulation. Because detecting AI piracy is going to be a lot harder than detecting copies à la Napster 2001 on the web. AI is more about derivatives, which means simple piracy detection methods of looking for exact copies won’t work. There is currently no automated way to detect what training data was used to generate an AI. AI holders usually hide their AI behind a website wall and only feed you the specific image you request.
Somehow, the structure of the AI itself – the neural net layout, depth, width, height, number of nodes, topology – needs to be treated as intellectual property of the AI company (because the design of AI topology is hard and gets intellectual property protection to encourage it)…
BUT… the training data needs to be verifiable, somehow, to NOT contain a particular work of art, WITHOUT revealing the underlying structure of the AI itself… which is, at least as far as I can tell right now, two diametrically opposed requirements.
As soon as someone figures THAT out (good luck!), we need a law to make sure that it’s implemented into all AI systems so detection of piracy can be done.
But in the meantime, we are right back in 2001 and the Napster website has just started offering free pirated copies of other people’s works. And someone needs to drop a hammer/lawsuit on it to get it to stop.
I disagree. A lot of painters lost their jobs when the camera came out. A lot of musicians lost their jobs when the phonograph came out. A lot of horses lost their jobs when the car and train came out. A lot of scriveners lost their jobs when carbon paper came out. A lot of draftsmen lost their jobs when CAD/CAM came out. Anything that eliminates jobs is good, so that these people can go and do something more human, for example: spend time with their family; worship their gods; exercise and hike and play sports; read books and poems…
DanielB: “The idea that images used for training AIs need to be opted in and paid for seems impractical and a bad idea… Good luck enforcing that”
Fatman: Why would enforcing it be impractical?
AI operates behind a website and you only see the final product; i.e., AI is usually handled like a trade secret.
One of the most famous trade secrets is the formula for Coca-Cola. It’s been that way for over a century. Hidden in a safe. Only a select few employees know it. They have to sign NDAs and face firing for revealing it.
So if AI were the secret formula for Coke, today’s AI websites would hide the formula and everything that makes it, and the website would simply sell a can of Coke.
Now imagine if, somehow, it turned out the secret Coke formula contained a massive intellectual property violation. You would never know. The only people who inspect the formula work for Coke and get fired if they go public.
Your only recourse would be to ask the Coca-Cola company, “Are you violating IP law?” They say “no.” And you simply have to take their word for it.
Because all you can access is buying a can of Coke from the website. You can’t see the secret formula.
“look at how human artists learn, …. Maybe that feels different somehow, but I am not sure why”
1: Human artists usually learn by BUYING the works they want to study.
2: Human learning is not itself a derived work of what it learned from.
If a human buys someone’s work, studies it, learns from it, and then starts to create something inspired by but sufficiently different from the original work, it doesn’t violate copyright law.
3: Almost none of this applies to AI, because AI takes its training data and creates a derivative work. Human learning is not a derivative work.
Machine learning is a derivative work, like compiling source code or compressing music into MP3 format.
Human learning is not derivative by itself. Machine learning is inherently a derivative.
@LawfulSideways
The AI owners still have to have the images before they can be fed into the AI, even if the images are collected and grabbed up by automated tools. The training data has to exist independently of the AI before it is used.
If those collections are not destroyed after the images are fed to the AI, could that not be examined and adjudicated? If the collections cannot be produced, deny the AI the right to operate in the marketplace? (Kind of like restrictions against destroying sensitive papers and records.)
I am only talking about tech aspects here. Leaving law and policy, and history, to those more knowledgeable.
@LawfulSideways, I believe you more or less accurately describe current trade secret law. But that is not written in stones handed down from a mountaintop. That law was passed in the ’70s and last amended about five or six years ago. We could add requirements stripping trade secret protections (invalidating NDAs, etc.) from AIs trained on secret corpuses?
As cool as Coca Cola’s safes are, the real protection is that their lawyers could sue into the ground any competitor who stole trade secrets. Remove that and trade secrets become much less secret.
Filip: “Well, according to the research I read back in the day (don’t make me quote, I’ve forgotten the authors), the effects were the opposite: instead of music consumption and music production going down, it went up. Both paid and unpaid.”
Wow. No. Not even close.
Research by who? Napster?
What happened was piracy went through the roof, legal sales of music plummeted, and some huge lawsuits dropped a Mjolnir-sized beating on the pirate sites – nuked them from orbit, only way to be sure. And streaming didn’t pick up until Apple started a legally legit service and the record companies finally came around. And it took over a decade for the legal sale of music to get back to where it was pre-Napster.
The argument from the Napster pirates was: neener neener, you can’t stop the signal, information wants to be free, stop trying to make money selling copies of music, and plan on all your income coming strictly from tickets to live concert performances.
None of which has any basis in copyright law, the economics of being a musician, or even the basics of how computers or the internet work. The kind of people behind Napster piracy in 2001 would today be pushing cryptocurrency. It’s nothing more than “I want it, therefore make it legal.”
“Turns out that “free” hurts “established” a lot more than it hurts “up-and-coming.””
Yeah, you’re a regular copyright Robin Hood, stealing from the rich established and giving to the poor up-and-coming. Except you steal from whomever you can and give money to no one.
“Secondly, we’re already in a market where fandom and personal reputation rule.”
Never have I heard someone so boldly defend piracy on the grounds of “do it for the exposure” as a pirate who doesn’t make music or art.
“One option is the return of patronage.”
It’s called “work for hire” these days and operates inside IP law. Destroy IP law, and you destroy patronage.
“expansion – more art, more books. It would require platforms to provide an option to cycle covers, the way Facebook can cycle ads. That way, you can design 10-20 covers, then A/B (…X/Y/Z) test them to find the one that draws the most – and possibly even yields the best reviews.”
And then pirates steal it.
Piracy is piracy is piracy. You don’t fool pirates by doing something 20 times. They just steal all 20 copies.
“If it was available cheaply, you’d be able to target a niche that would yield you your needed Kevin Kelly 1,000 True Fans.”
So… your plan is for artists to make art and sell it at such a low price that it’s cheaper to buy than steal?
Do you even hear what you are saying?
“going artisanal. Just like you can have a mass-produced poster on your wall, you can have a hand-made painting there, too. Posters didn’t make all painters suddenly stop painting, or stop making money.”
So your first solution was to sell art cheaper than it is to steal. And your second solution is to take the most expensive, labor-intensive approach of selling custom, one-off, bespoke, artisanal works? Which would take weeks or months or years of labor?
Meanwhile AI can spit out a literal bespoke custom artisanal work, directly from the words of a user, in a few minutes?
THAT is your plan?
” I imagine that we’ll see a lot more trademarks being applied to art, ”
We are at the “if the glove doesn’t fit, you must acquit” stage of the presentation.
“I’m not enough of an IP-geek to know.”
You, sir, are no IP-geek. There hasn’t been one idea you’ve forwarded that remotely reflects the most rudimentary understanding of intellectual property law. The world’s overall understanding of IP law is worse because of your comment. Some unsuspecting AI will gobble it up in the future and self-destruct out of a sense of responsibility to keep these words from being replicated.
“I’m a firm believer that technological change works against the established and powerful, and for the small and nimble. ”
The most powerful AI systems are massively complex systems owned and operated by some of the biggest corporations and governments on the planet. They were trained on gobbling up massive amounts of private data, personal data, and copyrighted works. That only makes the rich richer.
And screaming “piracy!” fixes none of it.
“Already, a lot of indie authors make their own covers.”
If they use AI, they are probably using pirated images gobbled up in training data, meaning the actual “little guys” who used to make cover art for a meager living… can’t.
“So yeah, I’m somewhat AI-positive. I’ve stuck my head out. Chop away.”
Every word you wrote reflects a complete and total lack of understanding of how intellectual property laws work, as well as a complete lack of understanding of the economics driving people to create new IP. Literally every word is wrong.
Your post is indistinguishable from an AI trained on random comments that mention “copyright” and regurgitates a meaningless assemblage of words with zero understanding of any of the underlying topics. If you are not an AI, you have managed to fail the Turing test.
Long read on some issues relating to AI.
https://scottaaronson.blog/?p=6823
“idea that we can close the door on AI-generated art strikes me as another version of Don Quixote charging at the windmills”
2001 called. They want their “can’t stop the Napster” arguments back.
“Surely an artist, assisted or not by an AI, can create better art than an AI alone.”
Yes. But if AI sites immediately gobble it up and sell it themselves, how does that artist make money?
“The idea that images used for training AIs need to be opted in and paid for seems impractical and a bad idea… It’s also a bad precedent, extending copyright protection in troublesome ways”
Computer transformation of works has for decades consistently been interpreted by the courts as creating a derivative work. Creating derivatives is a right protected by copyright.
Deep learning is nothing more than a computer algorithm applied to a bunch of works. It is like lossy compression combined with lookup-table behavior, but it can also interpolate between the data it read. It’s not magic. It’s not a new independent creation. It is a derivative work.
The idea extends nothing about copyright law. Computer compiling source code into an executable has been considered a derivative for decades.
Laurie: “If those collections are not destroyed after the images are fed to the AI, could that not be examined and adjudicated?”
Yes. But. The training data only ever needs to be used once. So no need to keep it around.
Maybe the law could legally require all training data be kept and logged. But how do you prove that’s all of it?
An AI consists of (1) the neural network and (2) the gains and offsets of each cell in the network. So you create a network shape/topology and it starts out dumb. Then you run training data through it. What you end up with is gains and offsets (multiplier and addition values) for every node in the network. That’s deep learning.
The thing is, deep learning is a bit of an art, and you can go in and tweak some numbers here and there to get better results. You can do this manually. Like, if every gain is a fractional number from zero to one, you might go through every cell and set any gain less than 0.1 to zero, and change any value greater than 0.9 to a 1.
The AI might act the same overall. Or possibly even better. And it’s all hand edits, manual changes, maybe a tool you run on the results as an AI engineer.
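As a concrete sketch of that kind of hand tweak, here is the 0.1/0.9 thresholding pass just described, in Python; the net is simplified to a nested list of gains, purely for illustration.

```python
def snap_gains(gains, low=0.1, high=0.9):
    """Post-training hand tweak described above: gains below `low`
    snap to 0.0, gains above `high` snap to 1.0, the rest stay put.
    Edits like this leave no trace in the original training data."""
    def snap(g):
        if g < low:
            return 0.0
        if g > high:
            return 1.0
        return g
    return [[snap(g) for g in node] for node in gains]

layer = [[0.05, 0.42, 0.95], [0.30, 0.99, 0.08]]
print(snap_gains(layer))  # [[0.0, 0.42, 1.0], [0.3, 1.0, 0.0]]
```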
The problem then is that even if you had the entire topology of the network (which would be extremely valuable proprietary data), and even if you had every last bit of training data, you still wouldn’t have everything you need to reproduce the exact gains/offsets in the final AI. You would also need any and all edits, changes, and tweaks that any AI engineer applied to the data, and in what order.
Then you could create a blank copy of the network, run the training data through it, apply any manual tweaks, and then maybe the gains/offsets you produce match the ones the corporate AI is running. So it still comes down to trusting the corporation.
Thank you for that ‘antidote to all things Monday’ cat picture. :-)
LawfulSideways: “Yes. But. The training data only ever needs to be used once. So no need to keep it around.
Maybe the law could legally require all training data be kept and logged. But how do you prove that’s all of it?”
Of course you can’t prove it.
My suggestion was that if AIs are to be marketed, they should be required to retain the training corpus. Not make it public, but retain it, and produce it if legally required.
In existing, real-life situations of legally required retention of papers or records, can you prove that everything is there that should be? Can you prove that something that should be there is missing? You don’t normally have to make the records public, but every so often, that auditor comes knocking. Am I imagining it, or don’t we have this issue now? Don’t auditors exist, in part, for that very purpose, to address these kinds of issues?
I am not denying the way AI technology vastly complicates this idea. Maybe that’s a new area of forensic analysis — can this given corpus produce that AI, or are there gaps which would indicate missing pieces of corpus?
I am not trying to criticize your descriptions of how the AIs work. It correlates well with my own (decades out of date) experience in building small neural nets. I know that was one of my biggest points of curiosity — how did my neural net end up with that result, given what I fed it?
I’m just trying to think on what else can we do right now to help keep the problem from getting further out of control.