Where the training data comes from seems like the main issue, rather than the training itself. Copying has to take place somewhere for that data to exist. I'm no fan of the current IP regime, but it seems like an obvious problem if you get caught making money with terabytes of content you don't have a license for.
The slippery slope here is that you, as an artist, hear music on the radio, in movies and TV, and in commercials. All of that listening is training your brain. If an AI company just plugged in an FM radio and learned from that music, I'm sure a lawsuit could follow arguing that no one can listen to anyone's music without being tainted.
That feels categorically different, unless AI has legal standing as a person. We're talking about training LLMs; there's nothing more going on here than people using computers.
So then anyone who uses a computer to make music would be in violation?
Or is it some amount of computer-generated content? How many notes? If it's not a sample of a song, how does one know how many of those notes are attributable to which artist being stolen from?
What if I have someone else listen to a song and they generate a few bars of a song for me? Is it different that a computer listened and then generated output?
To me it sounds like artists were open to some types of violations but not others. If an AI model listened to the radio, most of these issues go away, unless we're saying that humans who listen to music and write similar songs are fine, but people who use computers to calculate the statistically most common song are breaking the law.
Potentially yes: if you use existing IP to make music, doing it with a computer isn't going to change how the law works. It does get super complicated, and there's ambiguity depending on the specifics. But mostly, if you do it in a non-obvious way and no one knows how you did it, you're going to be fine; anything other than that, and you may get sued, even if what you did was a legally permissible use of the IP. Rightsholders generally hate it when anyone who isn't them tries to make money off their IP, regardless of how they do it or whether they have a right to, unless they paid for a license.
That sounds like a setup for going after only those you can extract money from, not for actually protecting IP.
By definition, if your song is a hit, everyone has heard it. How do we show that my new song is a direct consequence of hearing song X, while your new song isn't a consequence of you hearing song X?
I can see an easy lawsuit: put out a song, then claim that anyone who heard it "learned" how to write their new album from it. The fact that AI can output something that sounds different from any individual song it learned from means we could claim nearly all works are derivative.
A lot of the griping about AI training involves data that's been freely published. Stable Diffusion, for example, trained on public images available on the internet for anyone to view, yet it provoked all manner of ill-informed public outrage. LLMs train on public forums and news sites. But people have this notion that copyright gives them some kind of absolute control over the stuff they "own," and they suddenly see a way to demand a pound of flesh for what they previously posted in public. It's just not so.
I have the right to analyze what I see. I strongly oppose any move to restrict that right.
Publicly available ≠ freely published.
Many images are made and published under anti-AI licenses, or are otherwise licensed in a way that requires attribution for derivative works.
The problem with those licenses is that the viewer doesn't need them in order to analyze the work; they can just decline the license. Licenses don't apply automatically: you have to accept them. And since they're contracts, they need to offer consideration, not just impose restrictions.
An AI model is not a derivative work; it doesn't include any identifiable pieces of the training data.
It does. For example, Harry Potter books can be easily identified.
It's also pretty clear they used a lot of books and other material they didn't pay for, obtained via illegal downloads. I'm fine with the practice itself; I just want it legalised for everyone.
I'm wondering: when I go to the library and read a book, does this mean I can never become an author because I'm tainted? Or am I only tainted if I stole the book?
To me this is only a theft case.
That's the whole problem with AI and with artists complaining about theft: you can't draw a meaningful distinction between what people do and what the AI is doing.
And what of the massive amount of paywalled content that AI was still trained on?
If it’s paywalled how did they access it?
By piracy.
https://arstechnica.com/tech-policy/2025/02/meta-defends-its-vast-book-torrenting-were-just-a-leech-no-proof-of-seeding/
You are dull. Very dull. There is no shortage of ways to pirate content on the internet, including torrents, and they wasted no time doing so.