Those claiming AI training on copyrighted works is “theft” misunderstand key aspects of copyright law and AI technology. Copyright protects specific expressions of ideas, not the ideas themselves. When AI systems ingest copyrighted works, they’re extracting general patterns and concepts - the “Bob Dylan-ness” or “Hemingway-ness” - not copying specific text or images.

This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages. The AI discards the original text, keeping only abstract representations in “vector space”. When generating new content, the AI isn’t recreating copyrighted works, but producing new expressions inspired by the concepts it’s learned.
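
To make the abstraction concrete, here is a deliberately crude sketch (a toy word-count vector, nothing like the dense embeddings real models actually learn) of how a passage can be reduced to numbers from which the original wording is not recoverable:

```python
# Toy sketch only: real models learn dense embeddings rather than counting words.
def to_vector(text, vocabulary):
    """Reduce a passage to counts over a fixed vocabulary, discarding everything else."""
    words = text.lower().split()
    return [words.count(term) for term in vocabulary]

vocab = ["rain", "train", "spain"]
vec = to_vector("The rain in Spain stays mainly in the plain", vocab)
print(vec)  # → [1, 0, 1]
```

Once the passage is collapsed into the vector, the sentence cannot be reproduced from it; real systems keep far richer representations, but the direction of the transformation is the same.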

This is fundamentally different from copying a book or song. It’s more like the long-standing artistic tradition of being influenced by others’ work. The law has always recognized that ideas themselves can’t be owned - only particular expressions of them.

Moreover, there’s precedent for this kind of use being considered “transformative” and thus fair use. The Google Books project, which scanned millions of books to create a searchable index, was ruled legal despite protests from authors and publishers. AI training is arguably even more transformative.

While it’s understandable that creators feel uneasy about this new technology, labeling it “theft” is both legally and technically inaccurate. We may need new ways to support and compensate creators in the AI age, but that doesn’t make the current use of copyrighted works for AI training illegal or unethical.

For those interested, this argument is nicely laid out by Damien Riehl in FLOSS Weekly episode 744. https://twit.tv/shows/floss-weekly/episodes/744

  • lettruthout@lemmy.world · 8 months ago

    If they can base their business on stealing, then we can steal their AI services, right?

    • LibertyLizard@slrpnk.net · 8 months ago

      Pirating isn’t stealing, but yes, the collective works of humanity should belong to humanity, not some slimy cabal of venture capitalists.

      • WaxedWookie@lemmy.world · 8 months ago

        Unlike regular piracy, accessing “their” product hosted on their servers using their power and compute is pretty clearly theft. Morally correct theft that I wholeheartedly support, but theft nonetheless.

        • LibertyLizard@slrpnk.net · 8 months ago

          Is that how this technology works? I’m not the most knowledgeable about tech stuff honestly (at least by Lemmy standards).

          • WaxedWookie@lemmy.world · 8 months ago

            There are self-hosted LLMs (e.g. Ollama), but for the purposes of this conversation, yeah - they’re centrally hosted, compute-intensive software services.

      • General_Effort@lemmy.world · 8 months ago

        Yes, that’s exactly the point. It should belong to humanity, which means that anyone can use it to improve themselves. Or to create something nice for themselves or others. That’s exactly what AI companies are doing. And because it is not stealing, it is all still there for anyone else. Unless, of course, the copyrightists get their way.

        • ProstheticBrain@sh.itjust.works · 8 months ago

          Ingredients in a recipe may well be subject to copyright, which is why food writers make sure their recipes are “unique” in some small way - enough to avoid accusations of direct plagiarism.

          E: removed unnecessary snark

          • oxomoxo@lemmy.world · 8 months ago

            I think there is some confusion here between copyright and patent - similar in concept but legally distinct. A person can copyright the order and selection of words used to express a recipe, but the recipe itself is not copyrightable. It can, however, fall under patent law if proven to be unique enough, which is difficult to prove.

            So you can technically own the patent to a recipe, keeping other companies from selling the product of that recipe; however, anyone can make the recipe themselves if they can acquire it and don’t resell the result. And that recipe can be expressed in many different ways, each expression having its own copyright.

  • TommySoda@lemmy.world · 8 months ago

    Here’s an experiment for you to try at home. Ask an AI model a question, copy a sentence or two of what they give back, and paste it into a search engine. The results may surprise you.

    And stop comparing AI to humans but then giving AI models more freedom. If I wrote a paper I’d need to cite my sources. Where the fuck are your sources, ChatGPT? Oh right, we’re not allowed to see that, but you can take whatever you want from us. Sounds fair.

    • PeterisBacon@lemm.ee · 2 months ago

      Did the experiment.

      Zero shock factor. It showed an empty Google search result. I have screenshots for the deniers. I don’t know what you think will happen, but unless you’re asking it some super vague question, where the answer would be unanimous across the board, it’s not going to spit out some shock-factor quote that you can Google. What a waste of an ‘experiment’.

      • TommySoda@lemmy.world · 2 months ago

        Bro this was 6 months ago lol. Models have gotten way better since then. I made this comment when Google was still telling people to put glue on pizza. Which, if you did re-input the answer, would take you to a Reddit post. Almost all of them would take you to a Reddit post back then.

    • PixelProf@lemmy.ca · 8 months ago

      Not to fully argue against your point, but I do want to push back on the citations bit. Given the way an LLM is trained, it’s not really equivalent to me citing papers researched for a paper. That would be more akin to asking me to cite every piece of written or verbal media I’ve ever encountered, as they all contributed in some small way to the way the words were formulated here.

      Now, if specific data were injected into the prompt, or maybe if it was fine-tuned on a small subset of highly specific data, I would agree those should be cited, as they are being accessed more verbatim. The whole “magic” of LLMs was that they needed to cross a threshold of data, combined with the attention mechanism, before the network was rather suddenly able to maintain coherent sentence structure. It was only with loads of varied data from many different sources that this really emerged.

    • HalfSalesman@lemm.ee · 1 month ago

      Microsoft’s Copilot funnily enough actually provides sources that it pulls from the internet if you ask it to.

    • fmstrat@lemmy.nowsci.com · 8 months ago

      This is the catch with OP’s entire statement about transformation. Their premise is flawed, because the next most likely token is usually the same word the author of a work chose.
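
      A toy bigram model (with a made-up one-line corpus, purely for illustration) shows the mechanism: greedily taking the most likely next token can walk straight back through the source text.

```python
from collections import Counter, defaultdict

# Toy "training corpus": a single memorized line (hypothetical example).
corpus = "so it goes and so it goes and so it goes".split()

# Count which word follows which - a bigram model, the crudest text predictor.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the word most frequently observed after `word`."""
    return follows[word].most_common(1)[0][0]

# Greedily generate from the first word: the "most likely token" at each
# step is exactly the word the source chose.
out = [corpus[0]]
for _ in range(5):
    out.append(next_word(out[-1]))
print(" ".join(out))  # → so it goes and so it
```

      Real models smooth this over billions of parameters instead of a lookup table, but the objective - predict the author’s next word - is the same.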

      • TommySoda@lemmy.world · 8 months ago

        And that’s kinda my point. I understand that transformation is totally fine, but these LLMs literally copy and paste shit. And that’s still if you are comparing AI to people, which I think is completely ridiculous. If anything these things are just more complicated search engines with half the usefulness. If I search online about how to change a tire I can find some reliable sources to do so. If I ask AI how to change a tire, it will just spit something out that might not be accurate, and I’d have to search again afterwards anyway just to verify it.

        It’s just a word calculator based on information stolen from people without their consent. It has no original thought process so it has no way to transform anything. All it can do is copy and paste in different combinations.

    • azuth@sh.itjust.works · 8 months ago

      It’s not a breach of copyright or other IP law not to cite sources on your paper.

      Getting your paper rejected for lacking sources is also not infringing on your freedom. Being forced to pay damages and delete your paper from any public space would be an infringement of your freedom.

      • explore_broaden@midwest.social · 8 months ago

        I’m pretty sure citing sources isn’t really relevant to copyright violation: either you are violating or you’re not. Saying where you copied from doesn’t change anything - but if you are using some ideas with your own analysis and words, it isn’t a violation either way.

        • Eatspancakes84@lemmy.world · 8 months ago

          With music this often ends up in civil court. Pretty sure the same can in theory happen for written texts, but the commercial value of most written texts is not worth the cost of litigation.

      • TommySoda@lemmy.world · 8 months ago

        I mean, you’re not necessarily wrong. But that doesn’t change the fact that it’s still stealing, which was my point. Just because laws haven’t caught up to it yet doesn’t make it any less of a shitty thing to do.

        • azuth@sh.itjust.works · 8 months ago

          It’s not stealing, it’s not even ‘piracy’, which also is not stealing.

          Copyright laws need to be scaled back so they don’t criminalize socially accepted behavior, not expanded.

        • ContrarianTrail@lemm.ee · 8 months ago

          The original source material is still there. They just made a copy of it. If you think that’s stealing then online piracy is stealing as well.

          • TommySoda@lemmy.world · 8 months ago

            Well they make a profit off of it, so yes. I have nothing against piracy, but if you’re reselling it that’s a different story.

            • ContrarianTrail@lemm.ee · 8 months ago

              But piracy saves you money, which is effectively the same as making a profit. Also, it’s not just that they’re selling other people’s work for profit. You’re also paying for the insane amount of computing power it takes to train and run the AI, plus the salaries of the workers, etc.

        • Octopus1348@lemy.lol · 8 months ago

          When I analyze a melody I play on a piano, I see that it reflects the music I heard that day or sometimes, even music I heard and liked years ago.

          Having similar parts, or a part that is (coincidentally) identical to a part of another song, is not stealing and does not infringe upon any law.

          • takeda@lemmy.world · 8 months ago

            You guys are missing a fundamental point. Copyright was created to protect an author for a specific amount of time, so that somebody else doesn’t profit from their work, essentially stealing their deserved revenue.

            LLM AI was created to do exactly that.

  • EldritchFeminity@lemmy.blahaj.zone · 8 months ago

    The argument that these models learn in a way that’s similar to how humans do is absolutely false, and the idea that they discard their training data and produce new content is demonstrably incorrect. These models can and do regurgitate their training data, including copyrighted characters.

    And these things don’t learn styles, techniques, or concepts. They effectively learn statistical averages and patterns and collage them together. I’ve gotten to the point where I can guess which model of image generator was used based on the same repeated mistakes that they make every time.

    Take a look at any generated image, and you won’t be able to identify where a light source is, because the shadows come from all different directions. These things don’t understand the concept of a shadow or lighting; they just know that statistically lighter pixels are followed by darker pixels of the same hue and that some places have collections of lighter pixels.

    I recently heard about an AI that scientists had trained to identify pictures of wolves, which was working with incredible accuracy. When they went in to figure out how it was distinguishing wolves from dogs like huskies so well, they found that it wasn’t looking at the wolves at all. 100% of the images of wolves in its training data had snowy backgrounds, so it was simply searching for concentrations of white pixels (and therefore snow) to determine whether or not a picture was of wolves.
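
    The wolf anecdote fits in a few lines of toy code (entirely invented data; the real case involved a trained network, not a literal pixel count): a “wolf detector” that only measures how snow-like the image is gets wolves “right” for the wrong reason.

```python
# Toy sketch with invented data: a "wolf detector" that actually detects snow.
def classify(image):
    """Label an image 'wolf' if most of its pixels are bright (snow-like).

    `image` is a 2D list of grayscale values from 0 (black) to 255 (white).
    """
    pixels = [p for row in image for p in row]
    bright = sum(1 for p in pixels if p > 200)
    return "wolf" if bright / len(pixels) > 0.5 else "dog"

snowy_husky   = [[255, 250], [255, 40]]  # a husky photographed on snow
midnight_wolf = [[30, 45], [20, 35]]     # a wolf, but no snow in frame

print(classify(snowy_husky))   # → wolf (fooled by the snow)
print(classify(midnight_wolf)) # → dog  (missed: no bright pixels)
```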

    • ricecake@sh.itjust.works · 8 months ago

      Basing your argument around how the model or training system works doesn’t seem like the best way to frame your point to me. It invites a lot of mucking about in the details of how the systems do or don’t work, how humans learn, and what “learning” and “knowledge” actually are.

      I’m a human as far as I know, and it’s trivial for me to regurgitate my training data. I regularly say things that are either directly references to things I’ve heard, or accidentally copy them, sometimes with errors.
      Would you argue that I’m just a statistical collage of the things I’ve experienced, seen or read? My brain has as many copies of my training data in it as the AI model, namely zero, but “Captain Picard of the USS Enterprise sat down for a rousing game of chess with his friend Sherlock Holmes, and then Shakespeare came in dressed like Mickey Mouse and said ‘to be or not to be, that is the question, for tis nobler in the heart’ or something”. Direct copies of someone else’s work, as well as multiple copyright infringements.
      I’m also shit at drawing with perspective. It comes across like a drunk toddler trying their hand at cubism.

      Arguing about how the model works, or its deficiencies, to justify treating it differently just invites fixing those issues and repeating the same conversation later. What if we make one that does work how humans do, in your opinion? Or one that properly extracts the information in a way that isn’t just statistically inferred patterns, whatever that distinction is? Does that suddenly make it different?

      You don’t need to get bogged down in the muck of the technical to say that even if you concede every technical point, we can still say that a non-sentient machine learning system can be held to different standards with regards to copyright law than a sentient person. A person gets to buy a book, read it, and then carry around that information in their head and use it however they want. Not-A-Person does not get to read a book and hold that information without consent of the author.
      Arguing why it’s bad for society for machines to mechanise the production of works inspired by others is more to the point.

      Computers think the same way boats swim. Arguing about the difference between hands and propellers misses the point that you don’t want a shrimp boat in your swimming pool. I don’t care why they’re different, or that it technically did or didn’t violate the “free swim” policy, I care that it ruins the whole thing for the people it exists for in the first place.

      I think all the AI stuff is cool, fun and interesting. I also think that letting it train on everything regardless of the creators wishes has too much opportunity to make everything garbage. Same for letting it produce content that isn’t labeled or cited.
      If they can find a way to do and use the cool stuff without making things worse, they should focus on that.

      • petrol_sniff_king@lemmy.blahaj.zone · 8 months ago

        Arguing why it’s bad for society for machines to mechanise the production of works inspired by others is more to the point.

        I agree, but the fact that shills for this technology are also wrong about it is at least interesting.

        Rhetorically speaking, I don’t know if that’s useless.

        I don’t care why they’re different, or that it technically did or didn’t violate the “free swim” policy,

        I do like this point a lot.

        If they can find a way to do and use the cool stuff without making things worse, they should focus on that.

        I do miss when the likes of Cleverbot were just a fun novelty on the Internet.

    • Eatspancakes84@lemmy.world · 8 months ago

      I am also not really getting the argument. If I as a human want to learn a subject from a book, I buy it (or I go to a library that paid for it). If it’s similar to how humans learn, it should cost equally much.

      The issue is of course that it’s not at all similar to how humans learn. It needs VASTLY more data to produce something even remotely sensible. Develop AI that’s truly transformative, by making it as efficient as humans are in learning, and the cost of paying for copyright will be negligible.

      • stephen01king@lemmy.zip · 8 months ago

        If I as a human want to learn a subject from a book, I buy it (or I go to a library that paid for it). If it’s similar to how humans learn, it should cost equally much.

        You’re on Lemmy, where people casually say “piracy is morally the right thing to do”, so I’m not sure this argument works on this platform.

        • Eatspancakes84@lemmy.world · 8 months ago

          I know my way around the Jolly Roger myself. At the same time using copyrighted materials in a commercial setting (as OpenAI does) shouldn’t be free.

          • stephen01king@lemmy.zip · 8 months ago

            Only if they are selling the output. I see it as more they are selling access to the service on a server farm, since running ChatGPT is not cheap.

            • Hamartia@lemmy.world · 8 months ago

              The usual cycle of tech-bro capitalism would put them currently at the early acquire-market-saturation stage. So it’s unlikely that they are currently charging what they will once they are established and have displaced lots of necessary occupations.

    • Dran@lemmy.world · 8 months ago

      Devil’s Advocate:

      How do we know that our brains don’t work the same way?

      Why would it matter that we learn differently than a program learns?

      Suppose someone has a photographic memory, should it be illegal for them to consume copyrighted works?

      • EldritchFeminity@lemmy.blahaj.zone · 8 months ago

        Because we’re talking pattern recognition levels of learning. At best, they’re the equivalent of parrots mimicking human speech. They take inputs and output data based on the statistical averages from their training sets - collaging pieces of their training into what they think is the right answer. And I use the word think here loosely, as this is the exact same process that the Gaussian blur tool in Photoshop uses.

        This matters in the context of the fact that these companies are trying to profit off of the output of these programs. If somebody with an eidetic memory is trying to sell pieces of works that they’ve consumed as their own - or even somebody copy-pasting bits from CliffsNotes - then they should get in trouble; the same as these companies.

        Given A and B, we can understand C. But an LLM will only be able to give you AB, A(b), and B(a). And they’ve even been just spitting out A and B wholesale, proving that they retain their training data and will regurgitate the entirety of copyrighted material.

    • Riccosuave@lemmy.world · 8 months ago

      Even if they learned exactly like humans do, like so fucking what, right!? Humans have to pay EXORBITANT fees for higher education in this country. Arguing that your bot gets socialized education before the people do is fucking absurd.

      • v_krishna@lemmy.ml · 8 months ago

        That seems more like an argument for free higher education rather than restricting what corpuses a deep learning model can train on

  • lightnsfw@reddthat.com · 8 months ago

    If ChatGPT was free I might see their point but it’s not so no. If you’re making money from someone’s work you should pay them.

    • Drewelite@lemmynsfw.com · 8 months ago

      You’re making an indie movie on your iPhone with friends. You sell one ticket. You now owe: Apple; Joseph Nicéphore Niépce’s estate (inventor of the camera); every cinematographer who first devised the types of shots you’re using; the writers since the beginning of time who created the story elements in the script; the mathematicians and scientists who developed lens technology; the car manufacturers that got you to the set; the guy whose YouTube tutorial you watched to figure out lighting; etc, etc, etc.

      Your black and white framing appears to provide a clear ethical framework until you dig a millimeter into it. The reality is that society only exists because of the work that all of the individuals within it produce. Things like copyright are an adapter to our capitalistic economy: they ensure that work which can be copied is protected enough that its creators have the opportunity to make money off of it. It exists so somebody else can’t immediately turn around and sell the same book someone else wrote, or just change a few words and do the same. This protection was meant to last 15 to 20 years, then enter the public domain for anyone to copy and rewrite as they please.

      Current copyright is an utter bastardization of its intended use. Massive corporations are trying to act like they’re fighting for the little guy to own their IP forever. But they buy up all that IP for pennies compared to how they turn around and commoditize it. Then they own all of what society produces in perpetuity. They can sit on their dragon hoards and laugh as they gobble up any new creation that strays too close. And people wonder why everything is a sequel of a sequel of a sequel owned by massive corporations.

      • lightnsfw@reddthat.com · 8 months ago

        I was trying to keep it simple.

        I would have paid them by purchasing the iPhone and whatever software I used. I paid for the car that transported me. I would have paid for my education. People can also give their work away for free if they want, or be compensated by ads, as in the case of YouTube or FOSS.

        Current copyright is an utter bastardization of its intended use. Massive corporations are trying to act like they’re fighting for the little guy to own their IP forever. But they buy up all that IP for pennies compared to how they turn around and commoditize it. Then they own all of what society produces in perpetuity. They can sit on their dragon hoards and laugh as they gobble up any new creation that strays too close. And people wonder why everything is a sequel of a sequel of a sequel owned by massive corporations.

        What do you think ChatGPT is trying to do? It’s already being used to churn out shitloads of garbage content. They’re not making things better.

        • Drewelite@lemmynsfw.com · 8 months ago

          By that rationalization, OpenAI is paying their Internet bill, and for a copy of Dune, so they’re free to use any content they acquired to make their product better. Your original argument wasn’t akin to, “Shouldn’t someone using an iPhone pay for one?” It was “Shouldn’t Apple get a cut of everything made with the iPhone?”

          You could make the argument that people use ChatGPT to churn out garbage content, sure, but a lot of cinephiles would accuse your proverbial indie movie of being the same and blame Apple for creating the iPhone and enabling it. If you want to make that argument, go ahead. But don’t pretend it has anything to do with people getting paid fairly for what they made.

          ChatGPT is enabling people to make more things, easier, to get paid. And people, as always, are relying on everything that was created before them as a basis for their work. Same as when I go to school and the professor shows me lots of different works to learn from. The thousands of students in that class didn’t pay for any of that stuff. The professor distilled it and presented it and I paid him to do it.

          • lightnsfw@reddthat.com · 8 months ago

            The problem is that they didn’t pay for the content they’ve acquired and they’re selling it to others. The creators are not being compensated and may not want to participate in AI development at all. If the creators agree to it, then fine, but most do not. Just look at what’s happening with art: people are scraping all of an artist’s work to create AI pictures in their style and impersonate them. That’s not okay.

  • MeaanBeaan@lemmy.world · 8 months ago

    This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages.

    Machine learning algorithms are not people and are not ingesting these works the same way a person does. This argument is brought up all the time and just doesn’t ring true. You’re defending the unethical use of copyrighted works by a giant corporation with a metaphor that doesn’t have any bearing on reality - in an age where artists are already shamefully undervalued. Creating art is a human process with the express intent of it being enjoyed by other humans. Having an algorithm do it removes the most important part of art: the humanity.

  • nek0d3r@lemmy.world · 8 months ago

    Generative AI does not work like this. It’s not like humans at all; it will regurgitate whatever input it receives, like how Google can’t stop Gemini from telling people to put glue on their pizza. If it really worked like that, there wouldn’t be these broad and extensive policies within tech companies about using it with sensitive company data, like protection compliances. The day that a health insurance company manager says, “sure, you can feed ChatGPT medical data” is the day I trust genAI.

  • Kühlschrank@lemmy.world · 8 months ago

    I thought the larger point was that they’re using plenty of sources that do not lie in the public domain. Like if I download a textbook to read for a class instead of buying it, I could be prosecuted for stealing. And they’ve downloaded and read millions of books without paying for them.

  • LANIK2000@lemmy.world · 8 months ago

    This process is akin to how humans learn…

    I’m so fucking sick of people saying that. We have no fucking clue how humans LEARN - aka gather understanding, aka how cognition works or what it truly is. On the contrary, we can deduce that it probably isn’t very close to human memory/learning/cognition/sentience (any other buzzwords that are stand-ins for things we don’t understand yet), considering human memory is extremely lossy and tends to infer its own bias, as opposed to LLMs, which do neither and religiously follow patterns to a fault.

    It’s quite literally a text prediction machine that started its life as a translator (and still does amazingly at that task), it just happens to turn out that general human language is a very powerful tool all on its own.

    I could go on and on as I usually do on lemmy about AI, but your argument is literally “Neural network is theoretically like the nervous system, therefore human”, I have no faith in getting through to you people.

    • ZILtoid1991@lemmy.world · 8 months ago

      Even worse, in order to further humanize machine learning systems, they often give them human-like names.

  • gcheliotis@lemmy.world · 8 months ago

    Though I am not a lawyer by training, I have been involved in such debates personally and professionally for many years. This post is unfortunately misguided. Copyright law makes concessions for education and creativity, including criticism and satire, because we recognize the value of such activities for human development. Debates over the excesses of copyright in the digital age were specifically about humans finding the application of copyright to the internet and all things digital too restrictive for their educational, creative, and yes, also their entertainment needs. So any anti-copyright arguments back then were in the spirit specifically of protecting the average person and public-interest non-profit institutions, such as digital archives and libraries, from big copyright owners who would sue and lobby for total control over every file in their catalogue, sometimes in the process severely limiting human potential.

    AI’s ingesting of text and other formats is “learning” in name only, a term borrowed by computer scientists to describe a purely computational process. It does not hold the same value socially or morally as the learning that humans require to function and progress individually and collectively.
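    The point that machine "learning" is a purely computational process can be illustrated with the smallest possible example: "training" is just numerically nudging a parameter downhill on an error measure. This sketch (illustrative, not any real framework's API) fits a single weight by gradient descent; nothing resembling human understanding is involved.

    ```python
    # "Learning" as pure optimization: fit w so that w*x approximates y
    # by repeatedly stepping w against the gradient of squared error.
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying rule: y = 2x

    w = 0.0     # the model's entire "knowledge" is this one number
    lr = 0.05   # learning rate: step size for each update
    for _ in range(200):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad

    print(round(w, 3))  # converges to 2.0
    ```

    Scaled up to billions of weights, this is still the same mechanical loop, which is why the borrowed word "learning" carries no social or moral weight by itself.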

    AI is not a person (unless we get definitive proof of a conscious AI, or are willing to grant every implementation of a statistical model personhood). Also, AI is not vital to human development, and as such one could argue it does not need special protections or special treatment to flourish. AI is a product, even more clearly so when it is proprietary and sold as a service.

    Unlike past debates over copyright, this is not about protecting the little guy or organizations with a social mission from big corporate interests. It is the opposite. It is about big corporate interests turning human knowledge and creativity into a product they can then use to sell services to - and often to replace in their jobs - the very humans whose content they have ingested.

    See, the tables are now turned and it is time to realize that copyright law, for all its faults, has never been only or primarily about protecting large copyright holders. It is also about protecting your average Joe from unauthorized uses of their work, more specifically uses that may cause damage to the copyright owner or to society at large. While a very imperfect mechanism, it is there for a reason, and its application need not be the end of AI. There is a mechanism for individual copyright owners to grant rights to specific uses: it’s called licensing, and in my view it should be mandatory at least for the development of proprietary LLMs.

    TL;DR: AI is not human, it is a product, one that may augment some tasks productively, but is also often aimed at replacing humans in their jobs - this makes all the difference in how we should balance rights and protections in law.

  • helenslunch@feddit.nl
    link
    fedilink
    English
    arrow-up
    1
    ·
    8 months ago

    Those claiming AI training on copyrighted works is “theft” misunderstand key aspects of copyright law and AI technology.

    Or maybe they’re not talking about copyright law. They’re talking about basic concepts. Maybe copyright law needs to be brought into the 21st century?

  • HereIAm@lemmy.world
    link
    fedilink
    English
    arrow-up
    1
    ·
    8 months ago

    “This process is akin to how humans learn… The AI discards the original text, keeping only abstract representations…”

    Now I sail the high seas myself, but I don’t think Paramount Studios would buy anyone’s defence that they were only pirating its movies to learn their general content and produce their own knockoffs.

    Yes artists learn and inspire each other, but more often than not I’d imagine they consumed that art in an ethical way.

  • arin@lemmy.world
    link
    fedilink
    English
    arrow-up
    1
    ·
    8 months ago

    Kids pay for books; OpenAI should also pay for the material it accesses for training.

    • FatCat@lemmy.worldOP
      link
      fedilink
      English
      arrow-up
      0
      arrow-down
      1
      ·
      8 months ago

      OpenAI, like other AI companies, keeps its data sources confidential. But there are services and commercial book databases that people understand are commonly used in the AI industry.

      • EddoWagt@feddit.nl
        link
        fedilink
        English
        arrow-up
        1
        ·
        8 months ago

        OpenAI, like other AI companies, keeps its data sources confidential.

        “We trained on absolutely everything, but we won’t tell them that because it will get us in a lot of trouble”