What a bad judge.
This is another indication of how copyright law is bad. The whole premise of copyright has been obsolete since the proliferation of the internet.
Why? Basically he simply stated that you can use whatever material you want to train your model, as long as you ask the author (or copyright holder) for permission to use it (and presumably pay for it).
“Fair use” is the exact opposite of what you’re saying here. It means that you don’t need to ask for any permission. The judge ruled that obtaining illegitimate copies was unlawful, but use without the creator’s consent is perfectly fine.
If I understand correctly, they are ruling that you can buy a book once and redistribute the information to as many people as you want without consequences. I.e., one student should be able to buy a textbook and redistribute it to all the other students for free. (Yet the rules only work for companies, apparently, as the students would still be committing a crime.)
They may be trying to put safeguards in place so it isn’t directly happening, but here is an example showing that the text is in there word for word:
That’s not at all what this ruling says, or what LLMs do.
Copyright covers a specific concrete expression. It doesn’t cover the information that the expression conveys. So if I paint a portrait of myself, that portrait is covered by copyright. If someone looks at the portrait and says “this is a portrait of a tall, dark, handsome deer-creature of some sort with awesome antlers” they haven’t violated that copyright even if they’re accurately conveying the same information that the portrait is conveying.
The ruling does cover the assumption that the LLM “contains” the training text, which was asserted by the Authors and was not contested by Anthropic. The judge ruled that even if this assertion is true it doesn’t matter. The LLM is sufficiently transformative to count as a new work.
If you have an LLM reproduce a copyrighted text, the text is still copyrighted. That doesn’t change. Just like if a human re-wrote it word-for-word from memory.
Well, it would be interesting if this case were used as precedent in a case involving a single student who did the same thing. But you are right.
This was my understanding also, and why I think the judge is bad at their job.
I suppose someone could develop an LLM that digests textbooks, rewords the text, and spits it back out, then distribute it for free, page for page. You can’t copyright the math problems, I don’t think… so if the text’s wording is what’s protected, that would have been changed.
If a human did that it’s still plagiarism.
Oh, I agree it should be, but following the judge’s ruling, I don’t see how it could be. You trained an LLM on textbooks that were purchased, not pirated, and the LLM distributed the responses.
(Unless you mean the human reworded them, then yeah, we aren’t special apparently)
Yes, on the second part. Just rearranging or replacing words in a text is not transformative, which is a requirement. There is an argument that ‘AI’ is capable of doing transformative work, but the tokenizing and weighting process is not magic, and in my use of multiple LLMs they do not have any more understanding of the material than a dictionary understands the material printed on its pages.
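To make the "rearranging or replacing words isn't transformative" point concrete, here is a toy sketch (the synonym table and sentence are made up for the example, not from any case or model): naive word substitution leaves the original's structure, length, and meaning intact.

```python
# Toy illustration: naive synonym replacement preserves the original
# expression almost entirely, which is why it isn't "transformative".
# The synonym table and example sentence are invented for this sketch.

SYNONYMS = {"quick": "fast", "brown": "dark", "jumps": "leaps"}

def reword(text: str) -> str:
    """Replace each word with a known synonym, leaving the rest untouched."""
    return " ".join(SYNONYMS.get(w, w) for w in text.split())

original = "the quick brown fox jumps over the lazy dog"
reworded = reword(original)

print(reworded)  # the fast dark fox leaps over the lazy dog

# Word order, word count, and sentence structure are all preserved:
assert len(reworded.split()) == len(original.split())
```

The output reads like a paraphrase, but side by side with the original it maps word-for-word onto the source text, which is exactly the kind of copying courts treat as non-transformative.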
An example was the wine glass problem. Art ‘AI’s were unable to display a wine glass filled to the top. No matter how it was prompted, or what style it aped, it would fail to do so and report back that the glass was full. But it could render a full glass of water. It didn’t understand what a full glass was, not even for the water.

How was this possible? There was very little art of a full wine glass, because society has an unspoken rule that a full wine glass is the epitome of gluttony, and that wine is to be savored, not drunk. Whereas references to full glasses of water were abundant. The model doesn’t know what “full” means, just that pictures of a full glass of water are tied to the phrases “full”, “glass”, and “water”.
Not at all true. AI doesn’t just reproduce content it was trained on on demand.
It can; the only thing stopping it is if it is specifically told not to, and that check succeeds. It is completely capable of plagiarizing otherwise.
For the purposes of this ruling it doesn’t actually matter. The Authors claimed that this was the case and the judge said “sure, for purposes of argument I’ll assume that this is indeed the case.” It didn’t change the outcome.
I mean, they can assume fantasy, and it will hold weight because laws are interpreted by the court, not because the court is correct.
It made the ruling stronger, not weaker. The judge was accepting the most extreme claims that the Authors were making and still finding no copyright violation from training. Pushing back those claims won’t help their case, it’s already as strong as it’s ever going to get.
As far as the judge was concerned, it didn’t matter whether the AI did or did not “memorize” its training data. He said it didn’t violate copyright either way.
Huh? Didn’t Meta skip asking for any permission, and pirate a lot of books to train their model?
True. And I will be happy if someone sues them and the judge says the same thing.