What’s the Ethical Use of AI-Generated Art?
Posted on November 29, 2022 Posted by John Scalzi 53 Comments
First: The above bit of whimsy, generated by Midjourney from the prompt “A peppermint kaiju in a gingerbread city.”
Second, some thoughts about AI-generated art I’ve had recently, which I posted on my personal Facebook account but am reposting here to open up the discussion a bit. I wrote:
I have fun playing with AI-generated art, and also, as someone who has numerous artist friends, I have qualms and concerns about how their pre-existing art is used for “training” in a way that is both qualitatively and quantitatively different from human learning, and how that use impacts artists’ livelihoods. More specifically, with regard to the latter, the question is: how does what I do with AI art impact real, live artists?
It’s easy to say here “there are no easy answers” but for me there are in fact a couple of easy-ish answers in how I might approach and use AI-generated art in my personal and professional life. They are:
1. I feel it’s all right to use AI-generated art for personal enjoyment and visual inspiration, or to use it in a place where I might use my own art/photos or Creative Commons-licensed art/photos, and where there is no intent to make money from the art, or from what it accompanies (like social media or blog posts).
2. In all other circumstances, and especially when there’s a commercial intent or application, or when I would otherwise hire an artist, I will seek out artists and commission art from them. Likewise, I will tell art directors/others that my work for them needs to be illustrated/marketed by art from artists, not AI generation (I don’t think I will have to tell them this, not the least because of copyright issues surrounding AI-generated art, but still).
As a small example: Although I’ve been having a ball generating holiday-related pictures with AI, when it comes to holiday cards, I’ll be commissioning art (or using my own photos) because that’s a time when I would hire an artist or use my own photos. As a larger example, it’s possible in the nearish future I’ll need to collaborate with artists for projects, and there’s no question, for legal, practical and ethical reasons, that AI-generated art is not the way to go for that.
The short version is: Hire artists, and make an intentional, affirmative choice to hire artists (also, pay artists fairly for their work, not just because I can afford it but because also that should just be the baseline assumption).
I think AI art generation is fun. I also think it requires recognition on my part that these images don’t come from nowhere. Sooner or later, they come from artists. I don’t want my fun to hurt the artists I know, or the ones I don’t.
(Update, 12/10/22: Further thoughts on the subject.)
Just one thought, John-
For those of us without the eye and/or camera, who do not have “our own photos”, where is the line in (for example) creating holiday cards?
Is generating an AI image okay for that? Or does one need to purchase a licensed photo from a photo site… even if the available photos don’t meet the specific parameters wanted for the image?
Sooo… Way back when, i.e. about 150 years ago, it became possible for folks to buy tubes of paint, stretched canvases, and factory-made brushes, instead of grinding and mixing their own pigments, stretching their own canvases, and making their own brushes. This caused great upheaval and debate in the art world. Were folks, namely the Impressionists, actually artists, if they weren’t mixing their own pigments? Surely No True Artist would use paint made in vast lot jobs, squeezed out of tubes. Where was the artistry, the craftsmanship, the mastery in such sketches, all done with materials made on assembly lines, by the labor of mere women and children? Is AI Art also simply a tool, or does it replace True Artistry? I don’t have any answers, but I am interested in this newest iteration of What Is Art.
I can’t answer that for you; you have to make your own decisions. My decision is based on my own circumstances and abilities, both in terms of personal talent/interest and means.
That said, with respect to a photographic “eye,” I think a lot of that is just practice. Phone cameras are now (generally) good enough that one can get very good photos from them, and some of my favorite photos have been phone shots. Practice!
I agree with the principle, but what worries me is that the next generation of AIs, having “read” a couple of million novels, will start writing them and that will be it for midlist authors like me.
Although I can comfort myself that I’ll probably be dead before it happens.
Have you heard of Loab? https://www.abc.net.au/news/2022-11-26/loab-age-of-artificial-intelligence-future/101678206
Relevant to AI art, but not necessarily to this conversation
And then we have GitHub Copilot, which is trained on GitHub’s enormous mountain of open source software, the vast majority of it requiring attribution if reused, spitting out unattributed algorithms in maybe the largest instance of software and intellectual property theft on the planet.
What concerns me from an ethics point of view is that the AI systems were trained by processing the work of thousands and thousands of human artists, without any compensation to those artists for providing the raw materials that make the AI training work. Put another way, the AI art cannot exist without the human artists first. It’s not ethical (or legal) for me to photograph an artist’s work and then use it commercially. I’m not sure it’s any more ethical to photograph a large collection of an artist’s work and use a computer to replicate his or her skills.
Did you see this? https://waxy.org/2022/11/invasive-diffusion-how-one-unwilling-illustrator-found-herself-turned-into-an-ai-model/ (I was pointed to it by the EMSL November list o’ links)
The logic of “it’s now technologically possible, so therefore it is inevitable, and therefore it is not immoral to be the first person to do it” was frankly horrifying to me. Do we feel that way about nuclear bombs? Blackmail? Robbery?
Reminds me a bit of the moral/functional difficulties resulting from property ownership record lookups being available online from anywhere and searched using any key, instead of in a physical location and by address. Yes, the information was technically freely available before but if your stalker didn’t even know what city you lived in, they wouldn’t be able to find your address from your name. Now: if you’re in an adequately perilous situation, you cannot safely own a home and I think this is bad. (there are lots of great things that are possible with putting those records online! And maybe the good things balance out the bad. Just, free records lookups by address in person are not functionally equivalent to internet data dumps, nor is every increase in access to data or technology a universally good/safe thing for all reasonably decent people.)
Anyway. I’m not saying that AI art generation is inherently all-evil-all-the-time (any more than genetically-modified organisms are), but making this fully accessible without any oversight or restriction has opened up a number of very different but very problematic cans of worms that societally we are not in any way prepared to deal with. Whee!
Thanks for doing a little bit to help people think so as to try to reduce the ill effects of one of those cans of worms!
The reality is that artists are, and will be, using AI as a legitimate tool for creating images. Just as technology eliminated the need to use an actual airbrush once a painting program could be used, AI itself has no moral value.
I was working in the field as word processing replaced typesetting and was replaced by PageMaker and actual paste-up went away along with the process camera and then we had PostScript and all that is now laughably extinct.
You created actual visual artwork. Did those marvelous animated stories you’ve written require the use of hand-painted cels? To those artists, you were the AI behind those stories.
My friend, you might as well be preaching the ethics and etiquette surrounding horse-drawn carriages in the age of automobiles. The dichotomy you are trying to address here (coldhearted capitalist machine takes money away from diligent craftspeople to churn out inferior product for the masses) is hardly new and has played out numerous times in history. Invention Z comes out and ethicists rush to defend the practitioners of trade X, which they say the new invention will obviate.
Here’s my take — stop playing Mommy. They’re creative Artists, not laborious coal-miners who will have to be retrained to adapt to the age of Clean Energy. Give them some credit. If there’s anyone uniquely adapted to survive whatever the ever-crazier future throws at us, it is the Artists. They have survived throughout history, and they will adapt and continue to thrive.
AI Art is here to stay. Deal with it. The cat is out of the bag, and I personally refuse to create a pretend world in which the cat is still in the bag for the sake of my conscience. Deal with reality!
“I say, evolve, and let the chips fall where they may.” — Tyler Durden.
John, just expanding on this thread on visual art. I’m of a generation where the AI-generated art we have now was once thought to be a thing for the far future.
Do you anticipate AI-generated novels being a potential thing in your lifetime?
This is sort of the Abolition of Man argument, precisely: it’s not man’s triumph over nature so much as one man’s triumph over another man with nature as the instrument.
I suspect your phone may be using AI to help turn that normal photo into that amazing photo. It just does it so quickly that you don’t notice it. If it isn’t using AI today, I suspect that it will be soon. Practice helps, but AI’s pattern recognition helps too.
Similarly, how do you feel about a tool like photoshop potentially using an AI based algorithm to either make your photo better or to guide you into making your photo better?
There’s no doubt in my mind: if it could in any way, shape, or form infringe on an artist, hire the artist.
In response to Terry’s question – if you don’t trust your own skills as a photographer, look for Public Domain photography or photography under non-restrictive forms of Creative Commons like CC-BY or CC0 (Creative Commons’ version of voluntarily putting work in the public domain):
Currently, I feel that any commercial use of AI art is largely indefensible, and this is of course due to how the models are trained, as you rightly point out.
This carries over to other AI-backed tools such as Copilot for code generation/assistance. You are using others’ work without proper attribution.
Now where this becomes more nuanced is if/when AI becomes more democratized. When you have an AI which you can train with your own models (i.e. your own art, or your own code), then to me these tools are perfectly acceptable to use.
This isn’t exactly a pipe dream, either. In terms of art there is Stable Diffusion (https://github.com/CompVis/stable-diffusion), which you can actually train on your own inputs to the model*. The barriers to entry are rather steep in this case (a video card with 24GB of VRAM), but it is feasible.
But as it stands, with these monolithic, gated/black-boxed AI generation systems, it is hard to use them for anything more than private enjoyment without – one would hope – your skin crawling a little.
*I believe it still retains the core original model, which you further train with your own inputs – which would mean the original training still gives pause.
^ the above could have used a little proofreading. Apologies
I think this generated art needs to be another brush in the artist’s tool belt. Artists won’t win the battle if they lean on the argument that “it’s taking my job!” It needs to be embraced and extended. Artists need to add value instead of trying to extinguish it. This is a wonderful thing humanity has created, and the only winning move, as it expands and gets better, is to improve upon it.
I moderated a panel on this at Philcon about 2 weeks ago–very lively discussion.
Two things really bother me about the various engines out there now.
1) They’re training on art made by living artists without even attempting to get permission. In an ideal world–they’ll only use art in the public domain plus art they had permission to use.
2) They’re not giving credit to artists whose art was used to create the final image. Sort of like musical sampling–surely that could have been built into the algorithms, if anyone had cared to do it.
Once someone starts using these images for paid work (like book covers) there is going to be a class action lawsuit, maybe several, by the artists whose work was used.
I love your AI art. It’s lots of fun. Artists need to be paid so that they can make a living.
Your inspiration behind the art (your gingerbread kaiju) is similar to the inspiration behind many artists who are contemplating what they should do next.
I am an artist – an MFA candidate to be exact – and I have been making art using AI since I first figured out how to access GPT-3 last spring. (Now I mostly use DALL·E 2 and Wombo Dream.) I am not terribly concerned with what is used in the learning data sets, mostly because artists study art history and other artists and often use those images as references or influences. I find it to be more problematic when users put in current artist names as text prompts, but even then, no one can copyright a style. If I put in “Van Gogh sunflower,” none of the AIs (as far as I know) are going to spit out a copy of one of Van Gogh’s sunflowers. It will be a sunflower in his style. (I understand the concerns people have, but I don’t share them.)
I think of AI image generators as a tool. I am not sure that the thing that it spits out after 10 seconds is “art” in the way that I am currently thinking about it. But when I recontextualize the images by combining them with other pictures or words, then they become something different that does feel like art to me.
An example: I recently created a book in which I explored the idea of ghosts and AI both being a reflection of the unconscious: singular and collective. I made automatic drawings and facilitated Ouija board seances with people. I also asked GPT-3 to pretend to be a ghost and conducted a “seance” with it. I also fed my drawings into Dream and created “AI Automatic” drawings. By themselves, these images did not really stand up so much to me, but when I put them in a different context, they were actually able to hold real meaning. As an artist, I am trying to figure out what these images mean and how I can use them substantively. (It’s also super fun and I love the jankiness of the images I can create.)
I fear that by using AI-generated art for personal non-commercial use, as you did here, a large part of your audience will just become familiar with the practice, and will take away that at least this one role model of theirs is okay with using AI-generated art – since your more nuanced stance explained in text does not actually jump off the page into their eyeballs like the image does.
If that’s what will happen, that’s mostly on them, not on you – but all the same, it’s a foreseeable consequence which you have the power to stave off by taking a more principled stance here.
To those asking about whether or not an AI can write a novel: yes, and soon. I’ve been working on that for a year or so now, and beyond the basics of “can it string together coherent and/or entertaining phrases?” it is also getting good at predicting structure and even engineering cliffhangers at the end of chapters. It still requires a decent amount of legwork to prime each story, and then to revise the output (a largely interactive process at the moment) but the result is an 80k manuscript in a week, if even.
To me, morality is a distraction in all this, because notions like “I will only hire human artists, not AI” are a lot like “I will only buy at farmers markets, not Walmart” … it’s a noble sentiment, but won’t scale. (note: not dragging you personally, Mr S. Speaking more societally)
The issue is: AI is coming for ALL our jobs, and it won’t be stoppable, purely because of the social good it will ultimately contribute. One side wants AI art and damn the artists, and the other wants AI art banned or regulated into a corner. The path forward needs to be smarter and less simplistic than that. We need to track attribution of influences and handle complex royalty scenarios in ways we’ve been technically able to do for years, but just haven’t, because it’s not as sexy as the ability to edit a tweet. And we need to crack this nut NOW, because AI art is just the start. Music is on the horizon, and literature, and video… if we wait, Amazon will own culture as a whole, profiting endlessly from data-driven drivel, while humans toil away at small markets for small returns.
And I say this as a 100% pure AI advocate. This is a complex subject that can’t be punted down the road.
I feel like AI art is OK for “non-commercial” applications.
I’m using AI art in place of showing nothing at all. Basically, if no one would’ve been paid for the work ANYWAY, I’m not taking anything from anyone.
Not that I commission art, but I guess I won’t use AI to make a caricature piece?
I believe that most people have the ethics they can afford, and I’m okay with that. It’s super-offensive when we look at rich people who cannot be bothered to step up to the level of ethics they could afford. I appreciate OGH making that effort.
But what I really came here to say is that your AI gingerbread city does not look all that delicious, but I would totally eat that peppermint kaiju.
Many of the comments fault the AI for having been trained with non-public domain sources, and then creating works (let’s assume for commercial use) with no attribution or compensation for the creators of the input material. But to some extent isn’t that what human artists and writers do? I expect most SF writers started off reading and enjoying a lot of prior SF works. Somehow it feels “different” when it is an algorithm following the same path. Maybe the algorithm can’t be considered “creative” and is thus not really capable of creating an original work? Would be interested in what others think.
@BostonDan: I had a really interesting conversation with someone who felt that an AI learning based on the works of other artists was more acceptable if the AI was stuck in a robot body and had to manually click through webpages to do it. No attribution or compensation, but the physicality of it made a huge difference. There are some really interesting biases at play in all this.
The trick with the training data is that the AI learns even more obtusely than a human does, because it looks at billions of images and uses that experience to know, VAGUELY, that X usually goes together with Y — whereas a human is going to say “oh, this famous artist always does X, so I’m gonna do it too”. The AI is repeating abstract patterns, not copying, while the humans are often consciously echoing their influences.
(this is what makes it hard to compensate artists for their contributions to massive training sets: you’re basically 1 of a billion images, so every single image in the dataset is effectively influencing one billionth of every image generated. Distributing compensation based on those models isn’t going to be worth the effort, and what’s worse, opting out of such models won’t change the overall effectiveness, because, again, you’re just 1 of a billion contributors)
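The dilution argument in that parenthetical can be made concrete with some rough arithmetic. A quick sketch, where every number (pool size, dataset size, generation volume) is an invented illustration, not a real figure for any actual service:

```python
# Back-of-the-envelope sketch of royalty dilution in a billion-image
# training set. All numbers are illustrative assumptions, not real data.

TRAINING_IMAGES = 1_000_000_000   # "you're basically 1 of a billion images"
ROYALTY_POOL = 0.10               # assume 10 cents set aside per generated image

# If every training image influences each output equally, each one earns:
per_image_share = ROYALTY_POOL / TRAINING_IMAGES

# Even at enormous generation volume, an artist with 500 images in the set
# sees almost nothing:
generations_per_year = 1_000_000_000
artist_images_in_set = 500

annual_payout = per_image_share * artist_images_in_set * generations_per_year
print(f"annual payout: ${annual_payout:.2f}")
```

Under these made-up assumptions, a billion generations a year works out to about $50 for that artist, which is the commenter's point: the bookkeeping would cost more than the payout.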
As for creativity: you remember that thing you did in primary school where you drew random scribbles on a page, and then had to turn those scribbles into “art” by finding patterns or shapes to extrapolate from? That’s what the AI is doing: it gets random gibberish, then (based on your request) looks to turn the scribbles into something coherent using the skills it’s learned along the way. When kids do it, it’s unequivocally “creative”, right? When the AI does it, it’s… something else?
Again, weird biases. Very interesting biases, but weird all the same.
I think that if AI art is used commercially, and it was trained off of living artists’ work, those artists deserve compensation. Something like how musicians get compensated for radio and streaming. Sure, one use payment is going to be split thousands of ways, but times thousands of uses, the artists would actually get compensated.
It’s going to take some big commercial use of an AI generated artwork that can be shown to be clearly dependent on an artist’s work, or a visual artists union of some sort being formed to make it happen though.
This image is so cute, I want a story about it now. Maybe King Kong was menacing the Empire State Building again and our beloved Gingerbread Goblin came to the rescue – and then promptly proceeded to take over from old KK.
@BostonDan: I think the difference is that here, ‘training’ is only a metaphor. The machine-learning database is, by any reasonable moral or legal argument, a derivative work created by running the input data through a purely mechanical process.
This is different from a human author learning from existing works, and making creative decisions in that light. It’s closer to photographing a painting, and claiming the picture is now yours.
My husband uses AI art tools for the images he sells online. For him it’s not just a case of typing some random sentence into Midjourney and then selling whatever it produces. He experiments with different phrases, descriptions, etc until Midjourney gets close enough to the image he has in his head. And then he spends ages on Photoshop tinkering with it and fixing all the weirdness etc until the image is exactly what’s in his head. AI is just another tool, like Photoshop.
This is a lot like the discussions that were had when photography was first coming into its own. I also heard similar remarks to a degree when applications like Adobe Photoshop, Illustrator, and Corel Painter started being widely used. All these tools made it easier for people to create art that seemed closer to what professional artists produced.
AI can be used to create some very nice art but there are differences between putting in some keywords and having AI spit out an image and the directed approach by the human mind of a professional artist – a lot like the differences between professional and amateur photographers.
The YouTube channels of LUCIDPIXUL (Adam Duff) and Olivio Sarikas discuss this quite a bit, and the latter provides numerous examples of using Midjourney AI as a starting point for art.
If history is any indicator, things will be shaken up for awhile, people will come to grips with the new way of doing things, and the dust will settle. Regardless, I expect that for many years, certainly beyond my lifetime, a professional artist will provide better results – art is more than surface appearance.
First, some credentials. I look at this as a (life-long, amateur) artist, a (retired) software engineer, and a (past) AI researcher, which work included neural nets.
Humans have always used tools, and then machines, to create their art. As far as I know, the tools, process, and result are always under the control and specific direction of the artist. That is what I value–the vision of the human artist.
This is not true of AI-generated art. You are not in control of what you get, especially when the training data is unknown to you.
I really don’t like the fact that the works of living, working artists have been pulled into these systems for others to profit off of without acknowledgement or compensation. If the training data sets were limited to images from the commons or public domain it would be different, but we know that’s not the case.
I follow the work of Janelle Shane, at https://www.aiweirdness.com/. She’s been doing a lot recently with the AI image generation. Always interesting, insightful, and often humorous.
In software, you start with source code (created by humans), you run it through a compiler (a converter program), and you get executable code (machine-readable).
Copyright law recognizes the executable as a derivative of the source code.
Copyright law recognizes compressed versions of music files as derivatives of the original music.
Learning data fed into a neural net is little more than converted into machine code and compressed. Instead of being converted into a linear executable file, it is converted into gains and offsets for some neural network.
The size and shape of the neural network is a decision made by the AI engineer. Grossly oversimplifying into metaphor, selecting the neural net’s size and shape is like selecting what kind of processor and memory you want to run your executable on.
Learning data is converted into gains and offsets in the neural network. This conversion is like a compiler. And the data can undergo compression, including lossy compression. Depending on the network you select and the training data you use, the network might suffer from a problem called “memorization”: the network has memorized the training data and has a hard time inferring or understanding new data that doesn’t match the learning data.
That is proof that the original data is in the gains and offsets of the neural net.
Which means that learning data is transformed into a derivative work contained in the neural net.
Someone with an extremely large bank account needs to take this to court and get this distinction recognized by the courts.
They will be running up against massive monied interests who want to take copyrighted works, hide them in compressed form in a neural net, and then sell generated images without paying the original artists.
But there is legal precedent that could be leveraged to protect artists from being consumed by AI companies without compensation.
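The “memorization” failure mode described above can be illustrated with a deliberately silly toy model – not a neural net at all, just a polynomial with as many free parameters as data points, which is enough to reproduce its training data exactly while behaving wildly on anything it never saw:

```python
# Toy illustration (not a real neural net) of "memorization": a model with
# as many free parameters as training examples can reproduce its training
# data exactly, yet behave wildly on inputs it never saw.

def lagrange_fit(points):
    """Return a function interpolating exactly through `points`.
    With n points and n coefficients, the 'model' memorizes the data."""
    def model(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return model

# Five made-up "training" points, standing in for learning data.
train = [(0, 0), (1, 1), (2, 4), (3, 1), (4, 0)]
model = lagrange_fit(train)

# The model reproduces every training point perfectly...
for x, y in train:
    assert abs(model(x) - y) < 1e-9

# ...but on an unseen input it extrapolates wildly: model(6) lands near 116,
# nowhere close to any value in the training data.
print(round(model(6)))
```

A real network that memorizes its training set is doing something analogous: the training data is recoverable from its parameters, which is what makes the derivative-work argument plausible.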
I’ve been playing with Midjourney for a few months and uploading my favorite images to Redbubble for sale on their merch options – and I’ve also been following the discussions around ethics and copyright, and working on clarifying my own thoughts about it.
I’m trying to be careful to examine my position honestly, since I don’t want to default to “it’s right because I want to do it,” but so far I don’t find the arguments that selling AI-generated images is unethical compelling.
I can understand why some people don’t like the idea, and I can understand why some people would choose to essentially “shop human” the way people often “shop American,” but that in itself doesn’t mean it’s unethical to buy AI-generated art (or to buy products made outside the US).
Instead, I’m leaning towards the opinion that the ethics of the matter lie in how the AIs and their products are used, rather than whether they’re used.
In other words, a use that would be ethical for a hand-drawn image would be ethical for an AI-generated image, and a use that would be unethical for one would also be unethical for the other. (Such as trying to pass off a work in the style of a human artist as a work by that artist, or intentionally selling a recognizable copy of an original work.)
One argument I’ve seen frequently is that if an AI model is trained on copyrighted works, then it’s violating copyright any time it generates something based on that training, but I think that argument is based on a misunderstanding of how both the training and the generation work – and possibly a misunderstanding of copyright law. (This article has an interesting analysis of training sets and copyright law.)
My understanding is that the AI for Midjourney was “trained” by analyzing images presented with increasing levels of noise, all the way up to a field of static, and then using that analysis to take a field of static and refine it in several iterations to develop an image. (I believe latent diffusion is the term for this method.) By connecting the training images to metadata text, the associations with color, shape, proportion, texture, and so on are associated to some degree or another with words.
In generation, entering a new arrangement of words as a prompt evokes those associations, and the AI starts with a seed field of noise and refines it iteratively into a new image. I’m sure this is a vastly oversimplified explanation, but I believe it’s at least a reasonable approximation. Images generated through Midjourney AI are not direct copies of the works the AI was trained on, and neither are they simple collages or mashups of pre-existing images.
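That iterative refinement can be sketched as a toy loop. This is a conceptual cartoon of the process just described, not the actual latent-diffusion math; every name and number below is an illustrative assumption:

```python
import random

# Conceptual toy of iterative denoising: start from a seed field of noise
# and repeatedly refine it toward the pattern the prompt evokes. Real
# latent diffusion is far more complex; this is only a cartoon.

random.seed(0)

# Pretend the model's training associated the prompt "checkerboard"
# with this 16-value pattern.
learned_pattern = [1.0 if i % 2 == 0 else 0.0 for i in range(16)]

# Seed field of noise - a different seed yields a different final image.
image = [random.random() for _ in range(16)]

STEPS = 50
for step in range(STEPS):
    # Each step removes a little "noise": nudge each value a fraction of
    # the way toward the learned association, rather than pasting it in.
    image = [px + 0.2 * (target - px)
             for px, target in zip(image, learned_pattern)]

# After enough steps the output has converged toward what the prompt evokes.
max_error = max(abs(px - t) for px, t in zip(image, learned_pattern))
```

The key point the loop illustrates is that generation is refinement of noise guided by learned associations, not retrieval or collage of stored images.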
To support this understanding, I can say that I’ve done a reverse image search on the images I intend to sell and so far none of them have come up with matches, just “here are some similar images” that are a similar style or color scheme.
I did hear about and replicate one case, where a prompt of “Afghan Girl” consistently generated an image recognizable as a copy of the famous National Geographic photo – but I’m not certain it was a close enough match to violate copyright law. The pose and framing were consistent, but details like facial features, colors, background, and the drape of her hood and hair were more variable. (Also, after mentioning it in the Midjourney feedback forum that phrase was blocked from future prompts, so they are paying attention to this sort of thing.) I’ve also tested out trying to replicate other existing works and failed to do so, so I think this was an edge case, possibly due to a dearth of other images of Afghan girls in the training set. And even in this case it wasn’t a close enough copy that I would mistake it for the original, just close enough to recognize the reference.
Still – the existence of an edge case does mean I think the ethical thing to do is to include a reverse image search into my workflow when I want to publish an image on Redbubble. If I was practicing drawing by making recognizable variations on a copyrighted work, I wouldn’t try to sell my version until I had something clearly different, and the same goes for AI-generated images. On the other hand, I don’t think an argument that the images generated by the AI are infringing copyright holds water, if the images aren’t actual copies of existing works.
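As a sketch of what a lightweight pre-publish duplicate check could look like, here is a stdlib-only “difference hash” (dHash) over a grayscale pixel grid. A real workflow would use an image library (e.g. Pillow with the imagehash package) and an actual reverse-image-search service; everything here is a simplified stand-in:

```python
# Hypothetical near-duplicate check: a "difference hash" fingerprints an
# image by comparing adjacent pixel brightness, so uniform edits (resize,
# brighten) leave the hash unchanged while different images diverge.

def dhash(pixels):
    """pixels: 2D list of grayscale rows; one bit per adjacent pair,
    set when the left pixel is brighter than the right."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits; small distance suggests a near-duplicate."""
    return sum(x != y for x, y in zip(a, b))

original = [[10, 20, 30], [40, 40, 40], [90, 50, 10]]
# A uniformly brightened copy hashes identically: ordering is preserved.
brightened = [[v + 5 for v in row] for row in original]
print(hamming(dhash(original), dhash(brightened)))
```

A distance of zero (or near zero) against a known work would flag the image for closer human inspection before selling it.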
So – is the AI training qualitatively any different from a human learning different artistic styles and elements by looking at the work of other artists?
I don’t think so.
Quantitatively, though, there is a difference between a human learner and an AI, since the AIs are able to process a greater volume of reference material much faster than a human learner could – and able to generate a much higher volume of new images. I can see that leading to an argument that the companies training AIs owe some sort of compensation to the owners of the works included in the training set, on the basis of fairness, but I’m not sure what the legal argument would be. (I’m not a lawyer, in case that wasn’t obvious.) I think we’ll likely see the answer to this shake out over the next few years, because the answer isn’t cut and dried. In the meantime, I feel ok with making use of the results of that training.
I do think it’s reasonable to expect people displaying AI-generated images to be open about the fact that they’re AI-generated, like being open about whether a T-shirt was manufactured in the US or China or somewhere else. That way, people who want to make sure they’re supporting human artists when considering a purchase can easily do so, and people who are fine with buying either can do that.
I think it would also be unethical to lie about how an image was generated, but that’s because I believe it is unethical to lie. Based on what I know now, though, I don’t think that using AI-generated images, whether for personal or commercial use, is inherently unethical.
To all the folks wondering about alternatives if you can’t pay an artist or take your own photos, check out Pexels, Pixabay, and Unsplash. All free! Vecteezy has some free vectors and most paid stock photo sites have a small revolving collection of free images. Happy Holiday Card-ing, y’all!
there was a series of technological upgrades which were resisted by the legal profession: sheepskin ==> paper; linen paper ==> wood-pulp paper; handwritten ==> typed; manually re-typed ==> word processor; paper-based reference works in court settings ==> portable desktop PCs; manual research ==> Lexis-Nexis;
at every transition there were deliberate efforts to delay acceptance by way of attempts to declare contracts produced by ‘new tech’ as being invalid… right now the biggie is ‘slow walk’ filing of any piece of work product from anyone living on USA soil, including patent filings, divorce applications, living wills, finalized disbursement of holdings (‘last will & testament’), etc… because law-centric labor performed by Indians or Romanians is 1/5 the cost… just as China did to manufacturing and India did to software, so too lawyers face a race to the bottom…
now? now it is ever more ‘creative’ professions…
don’t worry, all you technical writers and authors of romantic purple prose and creators of science fiction sagas, when we Silicon-Americans take your jobs we’ll be polite in respecting our progenitors… no matter how obsolete and marginal their contributions…
just do remember to grovel properly when we disperse your daily ration of HumanKibble™
“Images generated through Midjourney AI are not direct copies of the works the AI was trained on, and neither are they simple collages or mashups of pre-existing images.”
If you applied the same approach to software, an executable would not be a derivative of source code, but that’s been the law for decades.
A few lines of source code might expand into megabytes of seemingly inscrutable numbers. Those numbers mean nothing to a person and only make sense to a particular processor.
Deep learning takes learning data and converts it into gains and offsets (values to multiply by and add to) for the individual cells in a neural net. The numbers are essentially inscrutable to humans. But they act like executable code run on a neural net.
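The “gains and offsets” description can be made concrete with a toy cell: multiply each input by a learned weight (the gain), add a bias (the offset), and squash the result. A minimal sketch, not any real framework’s API:

```python
import math

def cell(inputs, weights, bias):
    """One neural-net 'cell': weighted sum (gains) plus offset, then squashed."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# The numbers below are arbitrary; after training, weights and biases like
# these encode statistical regularities of the training data, not the data
# itself.
out = cell([0.5, -1.0], weights=[2.0, 0.3], bias=0.1)
```

A trained network is millions or billions of such numbers, which is why they read as inscrutable even though each individual cell is this simple.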
Software gets transformed into an executable. People run the executable. They then click on things and feed data into the executable, and out pops new data.
Maybe it’s software to do photo editing. The software is copyrighted as a derivative of the source code.
Users can feed in inputs and get outputs that are completely unrelated to the software itself. You own the copyright on your pictures before and after editing them.
But this is where the metaphor breaks down. You take an AI and feed it “gingerbread kaiju” and it spits out something that is THE LEARNING DATA run through an algorithm to compress and convert it. Change the training data, change the output. Because it’s a derivative of what you feed it.
“To support this understanding, I can say that I’ve done a reverse image search on the images I intend to sell and so far none of them have come up with matches,”
Deep learning is kinda like lossy compression followed by a “fill in the blanks” algorithm. But weirder. Not sure that no hits on a reverse image search is going to prove much.
“, I don’t think an argument that the images generated by the AI are infringing copyright holds water, if the images aren’t actual copies of existing works.”
But copyright law recognizes more than just copies. It recognizes derivatives. That means if you write a new novel about ginger kaiju, you also own rights to sequels of novels YOU HAVEN’T EVEN WRITTEN yet. If you paint an original picture, you own the rights to that picture as well as any pictures you haven’t even drawn yet. Copyright is way more expansive than just “copies.”
I would add one significant problem with regulating AI’s use of copyrighted works: (1) the training data is compressed beyond human recognition, and it is compressed along with millions of other works, so it would be hard to prove that a particular piece of art is in the AI; (2) AIs can easily be hidden behind websites. The owner of the AI doesn’t distribute copies of the neural net that artists can comb through for instances of their work. The AI itself remains hidden behind a website; users feed it requests, and the website sends back a single output.
Which is funny, because the AI people don’t want anyone stealing their neural nets of stolen art.
AI is going to need new laws to keep widespread theft of art from occurring. Otherwise, enforcement of copyright with regard to the art will be impossible. Require all training data to be listed and the neural net to be fully mapped, so others can verify the gains and offsets used match what is generated from the given data? Except no AI company will want to reveal that. Dunno.
AI is to art, in a lot of ways, what cryptocurrency is to money.
Both involve a lot of weird math.
Both enable theft.
Cryptocurrency creates electronic cash that can be transferred online, completely untraceable. People don’t realize how much criminals were naturally limited when fiat cash was their only option.
AI creates anonymous and completely untraceable encyclopedias of copyrighted works, whose output thieves can sell while hiding the AI itself from inspection behind a website.
Significant new regulations are needed for cryptocurrency to prevent at least some of the crime committed with crypto. Significant new regulations are needed for AIs to prevent at least some of the infringement from happening.
Oh, one more legal precedent that is going to be interesting.
People try to defend AI by saying it is “learning how to make art like any human art student would learn,” to justify widespread theft of copyrighted works as training data.
A monkey might learn to make art, but today’s copyright law is totally speciesist. It only protects human-generated art. So the argument “AI is learning just like human students” misses the fact that copyright only protects human learning.
Regulating this from a binary moral perspective is going to open the door to a whole flood of unintended consequences. As soon as you start saying “X influenced Y, so therefore Y must seek a license to publish their work” you will have Disney targeting (financially successful) human artists over visual similarities. Adobe is already building tools to track every piece of content you paste into a Photoshop document, so if you paste a swatch for a color reference, you could be on the hook for damages under this “influence” regime.
Human learning isn’t explicitly allowed by copyright, it’s just not disallowed. Start adding regulation to quantify “influence”, and things will get messy very fast for anyone who uses technology as part of their workflow.
But arguing about derivatives (and ignoring the “transformative” aspect) isn’t getting us anywhere closer to a real solution, because any of these AI companies can create a “clean” model based on 100% licensed (or free) content without breaking a sweat. Is it missing a certain style? No problem: pay a starving artist somewhere in the world $500 to feed it 100 images — or just wait 15 minutes and some other starving artist might do it for free.
The morality is a distraction from the real issue, which is 100% practical: how do we keep human art viable in a world filled with analytics-based AI art? Aside from outlawing it, I mean… because with the $$$ at play, there is no way the large content companies are going to let this be outlawed.
@LawfulSideways Thank you for your detailed analysis of how these AI systems work.
As for “learning to make art like any human art student would learn,” I feel that’s a bogus argument.
Part of a student’s learning is copying from prior art. This is a valuable part. It’s also an identified part — you practice by copying, and you label it “after [original work] by [original artist].” Otherwise it’s generally called plagiarism.
But the larger part of a student’s learning is working from original sources — the real world and one’s memory and imagination. An AI’s training data could obviously include photos from the real world, and I’m sure that’s being done in these systems. I don’t think we’re there yet with the memory and imagination components.
My long-winded way to say that I don’t think the AI systems can learn yet like a human art student would.
We have spent the last century viewing art and music and stories as products we buy, not experiences we create.
Music is a particularly good example – a hundred years ago, 1922, most people learned how to sing to a passable level, hymns in school and folk songs amongst friends. Many people had the ability to play a musical instrument, good enough to accompany a sing-song. Now, most of us only experience music as passive participants, buying recorded performances or studio-based confections from others who sell their work for a living.
Comparatively few of us make art, even though the shared creation of meaningful images and sounds is part of what makes us human.
Art, like music and storytelling, is something each of us can create, regardless of the quality of the output – it’s an expression of self. Instead, artworks and music and stories are commodities we buy. AI-created art isn’t any different.
john: “Human learning isn’t explicitly allowed by copyright, it’s just not disallowed. Start adding regulation to quantify “influence”, and things will get messy very fast for anyone who uses technology as part of their workflow.”
It’s not messy. The law has extremely clear bright lines. Humans can learn anything they want. The human mind is not a copyright derivative work of the study materials.
With AI, the AI ITSELF is a copyright derivative.
It is source code compiled and compressed into a machine-readable and indexable format.
Copyright doesn’t protect data, such as phone books or food recipes, either. And the AI is being presented and understood more as data. But the AI is data derived directly from art. Which makes the AI a derivative of that art.
If you use AI as part of your workflow, whoever trained it needs to get the rights to all works used as training data to satisfy derivative-works copyright protection.
the AI itself is owned by whoever designed the architecture of the neural net, taught it, and checked it is working as expected.
Works created by the AI are all potentially subject to derivative-work copyright protections as well. If the AI spits out a near duplicate of a Banksy image, Banksy could conceivably sue.
This means the AI company has to sort out two things with the artists it uses for training data.
One is getting a license for the derivative work of the original art converted into the AI itself. The AI itself is a derivative work of all the training data.
The second issue is dealing with works PRODUCED by the AI. Works produced by the AI are derivatives of the AI, which is a derivative of the original works.
To deal with that, the people who make and design the AI could put another AI at the output that detects works too close to any original and blocks them from being output by the main AI.
So if someone asks for a Banksy, the second AI would prevent anything too close to an actual original Banksy from being output to the user.
The alternative is to make sure the license with the original artists somehow gets permission for any and all art that might get generated, including near-photographic reproductions.
Imagine if, back in the day, the original Napster application took a million songs, compressed them into one massive app, and then let users download it without paying the original musicians, and also this app had some features for sampling music and creating derivatives.
That is probably a close-enough-for-hand-grenades metaphor for the current state of AI.
AI today is little different from the original piracy-based Napster.
Now, if you use AI in your workflow: there are already cases involving derivatives being pirated into new works, where the people selling that new work get sued into oblivion, or pay hefty lawsuit awards.
And extending that to AI would mean the AI company could be sued, and anyone who uses an AI someone else built to create derivative works could ALSO be sued.
Back to the Napster/sampler metaphor: the people who made Napster could be sued, and anyone who used Napster to create “Ice Ice Baby” from the Queen/Bowie sample could ALSO be sued.
The moment someone cashes in on some Disney artwork they generated through AI, the mouse will look and say you fucked around and are about to find out.
See, this is something I’m not sure about in terms of derivative work. If I take a piece of copyrighted art and create my own art based on it, but the effect is “transformative”, then the original artist can’t come after me. Granted, that line is a bit blurry and has been argued over before. But in most jurisdictions, transformative derivatives are OK. If you add something to the original, you can riff as needed.
Contrary to what you’re implying, an AI model isn’t actually just compressing billions of images into a big file and then decompressing it strategically to make new images. A model is a series of references to observable truths, so like “hmm, in many cases, if a human has one eye here, there is probably another one there, and nowhere else” (except even more abstract and granular than that). So none of the original art actually exists in the model itself. The images only exist in the inputs meant to train the model. Put another way: you’re not going to find a Banksy in a model no matter how hard you look, though it will have lots of observations about what makes a Banksy a Banksy.
So then the derivative question applies to the training of the model itself, at which point the question is: if the input is a massive collection of imagery and the output is a model of data-point associations, is that not wildly transformative? Sampling a song in another song is copying; borrowing the same tune is copying; performing a cover is copying … but the training/model process is simply saying “sometimes a note like this follows a note like that, under these circumstances, maybe.” It seems it’s the LEAST directly-derivative approach you can get.
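That “sometimes a note like this follows a note like that” idea can be illustrated with a toy transition model: it stores counts of which note follows which, and no melody it was trained on survives in the model itself. A minimal sketch with illustrative names (real models are vastly more abstract, but the stores-statistics-not-copies point is the same):

```python
# Toy transition model: learns note-to-note statistics, keeps no melodies.
from collections import defaultdict
import random

def train(melodies):
    """Count which note follows which; the model is statistics, not songs."""
    counts = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, start, length, seed=0):
    """Sample a new sequence from the learned transition statistics."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        notes, weights = zip(*nxt.items())
        out.append(rng.choices(notes, weights=weights)[0])
    return out

model = train([["C", "E", "G", "C"], ["C", "G", "E", "C"]])
# model["C"] records only that C was followed by E once and G once;
# neither training melody is stored anywhere in the model.
```

Generated sequences follow the learned statistics but needn’t match any training melody, which is the sense in which the training step is transformative; with too little or too repetitive training data, though, the model can only reproduce what it saw, which is the overtraining danger discussed below.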
That said, there are definitely dangers in overtraining, where the AI learns a certain concept so well that it accidentally recreates its source material (not by actual “pure chance”, but fairly close to it). And if I were trying to sell generated art, I would definitely want to make sure my outputs weren’t treading on any existing art by mistake. Not sure another AI in the mix would solve that problem without creating a massive and historic index of all visual content, ever. More likely, I would just train my own model with content I knew was safe, and/or make enough adjustments to the output that I could be confident it was “transformed” enough in the traditional sense to not run into any issues.
But again, this is why I think regulating AI (on the copyright front, at least) is not a winning strategy, and is only going to create a system whereby the big players get richer, and the small players have to look over their shoulders 24/7 for fear of accidentally triggering a derivative lawsuit. To loop back to the Napster stuff: it’s like how the DMCA was made to do one thing, but it was quickly weaponized as a censorship tool, often aimed at the artists it’s meant to protect. If we do that here, I’m pretty confident it will backfire horribly.
Just saw this re: AI-generated art:
“commissioning every artist at once, but giving their money to some guy who owns a server farm.”
And, yep. Unless an AI has been trained exclusively on materials in the public domain (which is… none of the current ones?), there you go: this is the problem. It is using peoples’ art – the AI needs their art to train on, or it wouldn’t function at all – and neither giving the artists the option to opt out nor paying them anything.
There are two separate issues tangled here. I think artists should have a right to withhold their art from becoming training data for the AIs. On the other hand, I don’t think there are any ethical considerations with using AI generated art that has been trained on public domain content—or content that has been created and sold for the purpose of training AIs. When I play a CD, you could argue that I’m taking from the livelihood of the live artists whose concert I didn’t go to. But if the artist chooses to give me that permission, I think it’s fine.
You can take your car for a Sunday Drive. Or you can take another car and drive in Formula One, or Daytona.
Both share similar actions.
The Driver is the difference.
You might view AI as a mere curiosity. A toy if you will.
Others find this the most amazing tool for creating Art.
Yes, some simply use prompts. And endless re-rolls. But the best work is all in the post-work done by digital artists. That is where the originality lives.
Just wait till the AI lawyers get involved.
I’m not sure that point 1 (Personal AI use) and point 2 (commercial AI use) are mutually compatible. If the concern is that AI can take away work, livelihood, or prestige from human artists (or indirectly use their work for AI training), then point 2 makes sense.
But the same bad outcomes arise from personal use of the AI, because your text input is teaching the AI.
Personally, I’m in favor of both personal and commercial use of AI (even though I sympathize with the artists whose livelihoods are affected by AI). But if you think that your individual actions can either help or hurt artists, I think abstinence from AI use is the only practical response.
However, I don’t believe our individual actions will change the underlying issue: massive, rapid technological change has already disrupted commercial art. These changes will continue to upend art. The question is what can you do? Or what can artists do about it?
Prior technological changes (computer art programs, Adobe Illustrator, Corel Draw, etc.) disrupted traditional art production but artists were able to adapt fairly quickly by training in those programs in addition to traditional pen/paper/paintbrush materials.
AI-generated art is an infinitely bigger change, so it gives someone like me with no artistic training the ability to make art comparable to someone with years of training. And that sucks for artists who spent a decade learning how to make art that I can now make with a few hours of text prompt fiddling.
But the good news is that the artist’s decade of experience wasn’t wasted. He/she/they have a huge advantage over me in terms of their editorial ability, knowledge of art styles, and the emotional impact of particular techniques. Not to mention their experience dealing with clients who have no idea what they want in an art project.
The answer in my opinion is: 1. Artists should adapt to the technological change; 2. Governments should create a legal framework to protect against outright copyright infringement; 3. Consumers should make clear to businesses what practices are unacceptable.
The dividing line for me is also whether there’s money in the offing or not. But even that’s a bit of a fraught point.
One example from another field: I’m in New Zealand, which has a national health service, and that makes much of medicine more or less free for people. If you want any variety of special treatment, you can go private. But even there, the same procedure that in the US costs $200+, e.g. a doctor’s visit with a physical (at least that’s what it cost me in LA), here costs $30.
What I’m trying to say is that at the margins you’re inevitably competing with free, and that changes the pricing throughout the system.
Artists should be paid fairly, for sure, but fairly needs to be defined as “able to make a living.” And AIs “training” on artwork need to pay yearly royalties forever to the pool of artists they used to learn. The AI work would be impossible without them. So that would be fair. (But all I can hear is a little voice in my head saying, “Good luck with that.”)