In the lawsuit, Jason M. Allen asks a Colorado federal court to reverse the Copyright Office’s decision on his artwork Théâtre D’opéra Spatial, arguing that it was an expression of his creativity.
Reuters says the Copyright Office refused to comment on the case, while Allen complains in a statement that the office’s decision “put me in a terrible position, with no recourse against others who are blatantly and repeatedly stealing my work.”
He did not make it. He essentially commissioned a machine to create an image for him using millions of pieces of art that were stolen from artists. It's no different from commissioning an artist to draw something for you, except the artist turns out to be someone who traces bits of other people's art, or copies and pastes it, and then you take credit for it by claiming that you made it yourself. I predict that this lawsuit is not going anywhere, as he does not have a proverbial leg to stand on.
Edit2: I wrote this in response to the first comment I read, but after reading the rest of the thread I wanted this more visible. I'm not karma whoring and didn't mean to spam the comments posting this twice, but the comments here are all engaging as fuck and feel like they're all circling around what I'm specifically pondering.
So why can't he copyright the prompt which created it? Obviously not being 100% cereal about this specific scenario, but in the early days of GPT-4 I fed it fucking dissertation-length prompt threads writing ridiculously niche and in-depth scripted functions. I don't know how to code but used a tool to create something extremely useful for my job. Some of the project took weeks to fully put together.
So what I'm really asking is, why would it matter if I used CNC lathes to make something I'd want copyrighted/patented, or if I use an LLM to make it? Should it be any less protected because it's taking the "muscle" or "legwork" out of it? Should engineers only design prototypes destined for the copyright/TM/®/patent office if the prototype can be made on manual machines? Again, I kinda understand I went over the top with this, but I am fascinated with how the fuck people are gonna come up with regulatory frameworks to define the modern age of intellectual property and all the TM/C/R/P drama to follow.
Edit: To expand, the shit I have made using GPT, having limited but interested experience with IT work, also didn't strike me as anything marketable until I got feedback from vendors and customers I gave it to, from reps that didn't know I made it. That's not the point of me asking; I just thought it'd help anyone who is gonna respond to see that my questions are coming from more of a "manufacturing a tool" type of understanding rather than the "AI toookurjerbs" suffering artist or musician type of understanding.
The copyright office's policy isn't perfect, but denying copyright to AI slop is probably the best we can expect from the system as it currently exists.
Besides I'm pretty sure you can still use AI in the production of an image and still claim copyright on the final image, just not any of the raw generations.
This is correct. If a painter uses AI to generate a concept and composition, then does a classical oil painting of it on canvass they can claim right to the image of the oil painting.
It's no different than an artist painting a public park or forest. They can't copyright that location, but they can copyright the painting of that location.
It's debatable who the artist is, however, because if you remove the AI from the picture he could never have made this, and if you remove the training data the results would also be different.
Realistically: everyone whose data this was trained on should be included as authors, if it's not just public domain.
There were similar debates about photographs and copyright. It was decided photographs can be copyrighted even though the camera does most of the work.
Even when you have copyright on something, you don't have protection from fair use. Creativity and being transformative are the two biggest things that give a work greater copyright protection against fair use claims. They are also what can give you the greatest protection when claiming fair use yourself.
See the Obama Hope poster vs. the photograph it was based on. It's too bad they came to a settlement on that one. I'd have loved to see the court's decision.
As far as training data goes, that is clearly a question of fair use. There are a ton of lawsuits about this right now, so we will start to see how the courts decide things in the coming years.
I think what is clear is some amount of training and the resulting models fall under fair use. There is also some level of training that probably exceeds fair use.
1. Purpose and character of the use, including whether the use is of a commercial nature or is for nonprofit educational purposes.
This is going to vary a lot from model to model.
2. Nature of the copyrighted work.
Creative works have more protection, so training on a broad set of photographs is more likely to be fair use than training on a collection of paintings. Factual information isn't protected at all.
3. Amount and substantiality of the portion used in relation to the copyrighted work as a whole.
I think AI training is safe here. Once trained, the model usually doesn't contain the copyrighted works or reproduce them.
4. Effect of the use upon the potential market for or value of the copyrighted work.
Here is where AI training presumably has the weakest fair use argument.
Courts have to look at all 4 factors and decide on the balance between them. It's going to take years for this to be decided.
Even without ai there are still lots of questions about what is and isn't fair use.
Hmmm. This comment made me realize that these ai images have something in common with collages. If I make a collage, do I have to include all the magazine publishers I used as authors?
Not defending the AI art here. IMO, with image-generating models the mechanisms of creation are so far removed from the "artist" prompter that I don't see it any differently than somebody paying an actual artist to paint something from a particular description of what to paint. I guess that could still make them something like a director if they're involved enough? Which is still an artist?
I dunno. I have my opinions on this in a "I know it when I see it" kind of way, but it frustrates me that there isn't an airtight definition of art or artist. All of this is really subjective.
It comes down to how transformative the work is. Courts look at things like how much of the existing work you used and how many creative changes were made.
So grabbing your 9 favorite paintings and putting them in a 3x3 grid is not going to give you fair use.
Cutting out sections of faces from different works and stitching them together into a franken face could give you enough for fair use if you made it different enough.
If you make a magazine collage, you've already paid all the magazine authors for their work by buying the magazine. I know it's not perfect, but at least in a collage situation there is some form of monetary trail going back to the artists.
If the AI company were to license their training data this would be an almost perfect metaphor. But the problem is we've let them weasel in without monetary attribution.
because if you remove the ai from the picture he could never have made this, and if you remove the training data the results would also be different.
How is this different from any other art? Humans are "trained" on a lifetime of art they've observed. Are they to attribute all of their art to those artists as well?
It's a bit more nuanced than that, because a human can still develop artistic skills by observing non-artistic creations beforehand.
For instance, the world's very first artist probably didn't have any paintings or sculptures to build off.
I'm not saying I necessarily agree that the person isn't an artist because they rely on external training data, but generative AI models most certainly need to observe other works to 'learn' how to make art, whereas humans don't necessarily have to. (Although if someone were to make a reinforcement learning model that improves its images purely from user feedback, starting from random variation, that would make the original training data point moot.)
The problem is "intellectual property" and capitalism more generally. As technology makes art harder to define and control, the absurdity of violently controlling art will hopefully collapse along with capitalism in general.
Intellectual property as a concept is incompatible with the continued advancement of human knowledge. Before copyright and patenting, we still had trade secrets and sensitive information, and those things cost us insights into metalworking we are still slowly recovering to this day. We still can't figure out how the Romans stumbled upon some of their glassblowing breakthroughs, and we only recently figured out Roman concrete.
Capitalism didn't invent greed, but it's certainly allowed greed to flourish as a core precept of its design.
Machine output cannot be copyrighted. Whether prompt tweaking and the other stuff involved in making AI art is enough for something to not be considered machine output is still to be decided by the courts.
Another thought experiment: If I hire an artist and tell them exactly what they should draw, which style they should use, which colours they should use etc does 100% of the credit go to the artist or am I also partly responsible?
It's nothing like photography. It takes zero special training to feed an AI a prompt. Yes, photographers, who held their camera, who spent years honing their craft, learning the ins and outs of the art of photography, who put their bodies in the field to capture real life, yes, they should be able to copyright their work.
Dude just pointed a camera, pressed click and thinks he's an artist? My god what have we become. We could take that train of thought all the way to "if you're not grinding up your own pigments and painting on cave walls you're not really an artist".
AI is a tool. I don't have an issue with someone using AI and calling themselves an artist, as long as they've generated the AI model based on their own previous art. You teach a machine to mimic your brush strokes and color palette and then the machine spits out images as you taught it. I don't see an issue there because you might as well have painted them yourself, it just saves time to have AI do most (if not all) of the work.
Problems arise when the AI is based on someone else's work and you claim the output as yours. Could you have painted the image exactly the same way?
I don’t have an issue with someone using AI and calling themselves an artist, as long as they’ve generated the AI model based on their own previous art.
That's, uh, not what happened here. And I've never heard of anyone doing that. Anyone with the skill to draw the kinds of pictures they want would simply draw them, instead of putting in tons of effort to get an AI to do it worse.
Yes a photographer is an artist. They need to know light diffusion, locational effects, distance and magnification, aperture, shutter speed, and have a subject prepped and able to take direction. They also have to have an insane understanding of post process editing.
They don’t simply type a sentence into a computer and get beautiful photographs.
A child can produce the exact same image by simply typing the exact same sentence into a computer.
A child cannot be given a camera and be tasked to produce the same quality photo as a professional photographer, and succeed.
So stop with this bullshit comparison. It’s apples and oranges.
Firstly, I agree with most of what you've said. However...
Problems arise when the AI is based on someone else’s work and you claim the output as yours. Could you have painted the image exactly the same way?
Is there anything in the world that isn't a derivative of something else? Can you claim to have a thought that isn't influenced by something you've heard, read, seen? Feeding art to AI is no different than a student walking a gallery and learning the styles of the masters. Is the AI better at it? Sure. But it's still doing the same thing. If someone with eidetic memory paints like Picasso, are they not an artist?
To really drive home the point, if I have a friend that is an artist, like, a really good artist, and I ask them to paint something for me, say, a field with wildflowers in the snow, and they come back with something that looks just like Landscape With Snow by Van Gogh, does that mean my friend isn't an artist? If I ask AI for that, and they come back with something like what my friend painted, how is it any different? We call them "learning" models, but we refuse to believe that they "learn". Instead we call it "theft".
You teach a machine to mimic your brush strokes and color palette and then the machine spits out images as you taught it. I don't see an issue there because you might as well have painted them yourself
This artist didn't "teach" the AI anything though. No more than I "teach" my computer something when I do file search using operands like "+" and "-"
Yeah, the joke is that someone thinks they can call themselves an artist by typing a sentence into a prompt on a computer. I get that you’re trying to call me out, but the failure in your joke is that I’m not claiming to be an artist. That douche is.
Imagine thinking this is a salient point, lmfao. “Oh, you criticise people writing text prompts into generative AI tools to produce art based on an amalgamation of everyone else’s stolen art, for claiming to be artists, AND YET, here you are writing text.”
It’s so fucking stupid. A work has to be actually creative and novel to be protected by copyright; most AI prompts would not meet the threshold of creativity and originality to benefit from protection.
You have to be the creator of the work in order to copyright it. He didn't create the work. If the wind organized the leaves into a beautiful pattern, he couldn't copyright the leaves either.
Problem is, the AI isn't a monkey with a camera; it is an algorithm licensed from a company. The guy basically outsourced the work and tried to copyright the finished product, which might be fine depending on the legal agreements and whether the AI company has the rights to it.
But it's just a photograph of the leaves, not the actual leaves. And to photograph something, you capture it according to your will: what the light situation will be, from which angle, at what focal length... so many options.
You can copyright a combination of words, though, and it was his unique combination that created the art. The artist doesn't copyright the palette, and the shop that sold the pigments holds no ownership over the painting. If the art is created with paint, pixels, or phrase, the final product belongs to the artist, and so should be protected by law for them.
In this case they’re not “fixing” their words; the final art is the created expression. Yet that expression wasn’t created by them but by the program.
Here their combination of words is the palette and paint, but the program “interpreted” it and so fixed it.
For example you can’t copyright a simple and common saying. Nor something factual like a phone book. Likewise you can’t copyright recipes. There has to be a “creative” component by a human. And courts have ruled that AI generated content doesn’t meet that threshold.
That’s not to say that creating the right prompt isn’t an “art” (as in skill and technique) and there is a lot of work in getting them to work right. Likewise there’s a lot of work in compiling recipes, organizing them, etc. but even then only the “design” part of the arrangement of the facts, and excluding the factual content, can be copyrighted.
You can copyright a combination of words, though, and it was his unique combination that created the art
So it's literature, then?
The artist doesn’t copyright the palette, and the shop that sold the pigments holds no ownership over the painting.
Sure, the artist doesn't copyright a palette, nor does the shop hold ownership of the pigments. But companies do patent pigments.
If the art is created with paint, pixels, or phrase, the final product belongs to the artist, and so should be protected by law for them.
If you commission an art piece with a detailed description of what it should display, the artist comes back to you with a draft, you tell them to adjust here and there, and after several rounds of drafting you finally get the commissioned art piece. Did you draw it?
And that's why I make art completely without instruction or man made tools. I actually independently developed cellphones and English purely to dunk on people on the internet.
Super interesting. The guy claims it wasn't just AI, that he performed alterations as well. If that's true but he still gets shot down, it might pave the way for AI being much more shunned in the world out of IP concerns on the output side rather than the training data side.
You can't copyright that music, game, book, screenplay or video because AI made some contribution.
Ah, I remember this image. It received some kind of award or something and created a stir when it was revealed to be AI gen. I can see why that would be incentive to want copyright.
I play with AI image generation all the time. No way do I see that as my work; there’s no skill other than positive and negative prompts, maybe feeding it a starter image set or something.
Where it might be more concerning is if you use AI gen to create a 2D example of something, then an artist creates a 3D physical representation of the thing. Who owns it? AI famously is not good at creating “whole” things, but one can certainly interpret that image to make a whole of it.
I play with AI image generation all the time. No way do I see that as my work; there’s no skill other than positive and negative prompts, maybe feeding it a starter image set or something.
I play with image editing software all the time. There's no skill other than adding or changing marks, maybe using a reference or something.
I guess no one owns it; it's like it's in society's collective mind or something, especially if it's an AI model trained on free stuff found in said society.
Like a chair: paint it, draw it, build it. You cannot copyright the chair itself, but you can't make an exact copy of someone's fresh creation either.
This hasn't been reported on much, but I actually checked what that "competition" really was, back when the image won the prize. It was some local festival in Bumfucknowhere, USA, which among various other events (sport events, food tasting, that sort of stuff) included an art competition. I doubt the jury was made up of highly experienced art critics.
And besides, people should trust their own eyes. If you like the picture, you like it, and if you don't, you don't. Appealing to the critics as a source of objective artistic judgment is naive, and I say that as someone who has published some art criticism myself.
Art competitions can be insanely pretentious; throwing together some vaguely pretty shit can win over the judges sometimes. My town hosted an art show competition thing where they had a judged award and a voted award. The judges granted it to some dude who made a purposefully pretentious painting to see if he could get the award, while the popular vote went to a hand-forged bronze axe based on Minoan double-headed axes.
This is stupid and I hope he gets his butt handed to him, but:
A federal judge agreed with the Office and contrasted AI images with photography: a camera is also a machine that captures the image, but it is the human that decides on the elements of the picture, unlike AI imagery, where the computer decides on the picture elements.
Journey outside the world of API models (like Midjourney) and you can use imagegen tools where it really is "the human that decides on the elements of the picture."
It can be anything from area prompting (kinda drawing bounding boxes where you want things to go) to controlnet/ipadapter models using some other image as reference, to the "creator" making a sketch and the AI "coloring it in" or fleshing it out, to an artist making a worthy standalone painting and letting the AI "touch it up" or change the style (for instance, to turn a digital painting or a pencil sketch to something resembling a physical painting, watercolor, whatever).
The latter is already done in Photoshop (just not as well) and is generally not placed into the AI bin.
In other words, this argument isn't going to hold up, as the line is very blurry. Legislators and courts are going to have to come up with something more solid.
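To make the "area prompting" idea concrete, here's a toy Python sketch. Everything in it (the grid size, the boxes, the prompts) is made up for illustration, and real regional-prompting tools apply the regions as masks inside the model's attention layers rather than per pixel like this, but the point it shows is the same: the human, not the model, decides which prompt governs which part of the picture.

```python
# Toy illustration of "area prompting": the human assigns a prompt to
# each region of the canvas before generation. All names, boxes, and
# prompts here are invented for the example.

WIDTH, HEIGHT = 8, 8

# (x0, y0, x1, y1) boxes chosen by the human, each with its own prompt.
regions = [
    ((0, 0, 8, 4), "stormy sky, dramatic clouds"),
    ((0, 4, 8, 8), "rolling green hills"),
    ((3, 2, 5, 6), "lone figure in a red cloak"),
]
base_prompt = "oil painting, baroque style"

def prompt_for(x, y):
    """Return the prompt controlling cell (x, y). Later (more specific)
    regions override earlier ones; fall back to the base prompt."""
    chosen = base_prompt
    for (x0, y0, x1, y1), prompt in regions:
        if x0 <= x < x1 and y0 <= y < y1:
            chosen = prompt
    return chosen

# grid[y][x] records which prompt the human put in charge of each cell.
grid = [[prompt_for(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]
```

Real tools do this same mapping with attention masks instead of a lookup grid, but either way the composition is human-specified before the model runs.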
The rule is already human expression in fixed form, of creative height. So you have to demonstrate that you, the human, made notable contributions to the final output.
I'm sure that an argument can be made that the final output can't be generated without the human-created prompt. Generative AI doesn't output images on its own without a seed/prompt, much like a canvas doesn't paint itself and a camera doesn't open the shutter on its own.
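On the seed point: "same prompt, same image" only holds when the seed is fixed too, because the sampler starts from pseudo-random noise. A plain PRNG (a stand-in for illustration, no real image model involved) shows the behavior:

```python
import random

def fake_generate(prompt, seed, n=4):
    """Stand-in for an image generator: output fully determined by
    (prompt, seed). A real diffusion model behaves the same way given
    identical settings."""
    rng = random.Random(f"{prompt}|{seed}")  # seed the stream with both inputs
    return [round(rng.random(), 6) for _ in range(n)]

a = fake_generate("an astronaut riding a horse", seed=42)
b = fake_generate("an astronaut riding a horse", seed=42)
c = fake_generate("an astronaut riding a horse", seed=43)

assert a == b  # same prompt + same seed: identical output
assert a != c  # same prompt, different seed: a different output
```

This is why image tools expose a seed parameter: publish the prompt and the seed and anyone can reproduce the output (given the same model and settings); change either and you get something else.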
I like it, it's more interesting to me than most of the boring "original" paintings people try to sell at art shows and online, and almost all of the stuff I've seen on people's walls in their homes. Not another triptych with 4 circles and a triangle, or a lone tree on a grassy hill, or a bowl of fruit and a wine bottle.
That could be said of much art from cave paintings to modern art, but the important part is that art is subjective. The main issue I have with the people complaining about AI generated art is, they only seem upset about it after they find out it's AI generated. If you really have the ability to see the difference, maybe you should be judging these contests. The judges had absolutely no idea until it was pointed out to them. If that bothers people, they shouldn't place any value in that competition.
People enjoy paintings with modern pigments and canvases and synthetic brushes as art, autotuned music (and other post-recording fixes) as art, photographs that use filters and image/color/artifact-correcting software as art, and I see no difference in prompt-tuned AI-generated art. It's a technology that makes it easier for the artist to arrive at their desired result, and it has the ability to inspire emotions and thoughts in the viewers, in the same way.
I'm guessing there is art you enjoy that I might not, but I am happy you have that available to you. It's funny to me that people are so strongly against something so innocuous. In that it inspires such strong emotions, it's arguably more artistic than the hand-painted submissions the judges found lacking.
Most AI art haters only hate it after they've learned it's made by AI. In reality it's next to impossible to tell well-made AI art from human-made digital art, for example. Of course everyone claims they can immediately tell the difference, but even they know they're kidding themselves. It's gatekeeping, pure and simple.
There's plenty of really good AI art and generating it is not as simple as they often make it to be.
Exactly. People already enjoy AI-assisted art in many other forms and they don't even realize it. When they find out, will they stop enjoying it? They don't seem to have stopped enjoying autotuned or computer-generated music, or CGI movies, or practically every artistic photograph made in the past 30 years. It's an arbitrary line in the sand.
Gatekeeping? Nah, it's not, as it's quite easy for AI bros to pick up a pencil. Nobody stops them, except maybe a disability.
And yeah, AI slop has become so good that rabid people are accusing actual artists of having their art made by AI. But why is that? Certainly not because it was trained on their previous art...
Fuck AI. It is used to replace actual humans and human creativity.