I fucked with the title a bit. What I linked to was actually a Mastodon post linking to an actual thing. But in my defense, I found it because Cory Doctorow boosted it, so, in a way, I am providing the original source here.
please argue. please do not remove.
I think we should have a rule that says if an LLM company invokes fair use on the training inputs, then the outputs are public domain.
That’s already been ruled on once.
A recent lawsuit challenged the human-authorship requirement in the context of works purportedly “authored” by AI. In June 2022, Stephen Thaler sued the Copyright Office for denying his application to register a visual artwork that he claims was authored “autonomously” by an AI program called the Creativity Machine. Dr. Thaler argued that human authorship is not required by the Copyright Act. On August 18, 2023, a federal district court granted summary judgment in favor of the Copyright Office. The court held that “human authorship is an essential part of a valid copyright claim,” reasoning that only human authors need copyright as an incentive to create works. Dr. Thaler has stated that he plans to appeal the decision.
Why would companies care about copyright of the output? The value is in the tool that creates it. The whole issue to me revolves around the AI company profiting from its service, a service built on a massive library of copyrighted works. It seems clear to me that a large portion of their revenue should go equally to the owners of the works in their database.
You can still copyright AI works, you just can’t name an AI as the author.
That’s just saying you can claim copyright if you lie about authorship. The problem then is that you may step into the realm of fraud.
You don’t have to lie about authorship. You should read the guidance.
What constitutes fair use?
17 U.S.C. § 107
Notwithstanding the provisions of sections 106 and 106A, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright.
GenAI training, at least regarding art, is neither criticism, comment, news reporting, scholarship, nor research.
AI training is not done by scientists but by engineers of a corporate entity with a long-term profit goal.
So, by elimination, we can conclude that none of the purposes covered by the fair use doctrine apply to Generative AI training.
Q.E.D.
it is pretty obviously scholarship and research
It is pretty obviously Research and Development of a commercial product in many cases. Not fair use.
there is no stipulation that the research must be non-profit.
Google scanned millions of books and made them available online. Courts ruled that was fair use because the purpose and interface didn’t lend themselves to actually reading the books in Google Books, just searching them for information. If that is fair use, then I don’t see how training an LLM (which doesn’t retain an exact copy of the training data, at least in the vast majority of cases) isn’t fair use. You aren’t going to get an argument from me.
I think most people who disagree are reflexively anti-AI, and that’s fine. But I just haven’t heard a good argument that AI training isn’t fair use.
here’s a side-channel attack on your position: every use, even an infringing one, is fair use until adjudicated, because what fair use means is that a court has agreed that your infringing use is allowed. so of course ai training (broadly) is always fair use. but particular instances of ai training may be found not to be fair use, and so we can’t be sure that you are always going to be right (for the specific ai models that may come into question legally).
“It’s perfectly legal unless you get caught!”
Considering most copyright cases come down to the individual judge’s decision, essentially, yes.