• FaceDeer@fedia.io · +8/−38 · 1 day ago

    The act of copying the data without paying for it (assuming it’s something you need to pay for to get a copy of) is piracy, yes. But the training of an AI is not piracy because no copying takes place.

    A lot of people have a very vague, nebulous concept of what copyright is all about. It isn’t a generalized “you should be able to get money whenever anyone does anything with something you thought of” law. It’s all about making and distributing copies of the data.

    • Knock_Knock_Lemmy_In@lemmy.world · +6 · 13 hours ago

      the training of an AI is not piracy because no copying takes place.

      One of the first steps of training is to copy the data into the training data set.
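A minimal sketch of what that ingestion step usually looks like in practice: raw documents are byte-for-byte copied into a local corpus directory before any training happens. Function and path names here are illustrative assumptions, not taken from any real training codebase.

```python
# Hypothetical sketch of a dataset-ingestion step: the raw documents
# are literally copied to local storage before training begins.
# Names and paths are illustrative, not from a real pipeline.
import shutil
from pathlib import Path


def ingest(source_files, corpus_dir="training_corpus"):
    """Copy raw documents into the training corpus directory."""
    dest = Path(corpus_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for src in source_files:
        target = dest / Path(src).name
        shutil.copy(src, target)  # a byte-for-byte copy is made here
        copied.append(target)
    return copied
```

Whatever happens inside the model afterwards, this step produces an ordinary copy of the work on disk, which is the part the comment above is pointing at.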

    • ultranaut@lemmy.world · +31 · 1 day ago

      Where the training data comes from seems like the main issue, rather than the training itself. Copying has to take place somewhere for that data to exist. I’m no fan of the current IP regime, but it seems like an obvious problem if you get caught making money with terabytes of content you don’t have a license for.

      • ferrule@sh.itjust.works · +2/−3 · 20 hours ago

        The slippery slope here is that you, as an artist, hear music on the radio, in movies and TV, and in commercials. All that listening is training your brain. If an AI company just plugged in an FM radio and learned from that music, I’m sure a lawsuit would follow arguing that no one can listen to anyone’s music without being tainted.

        • ultranaut@lemmy.world · +3 · 19 hours ago

          That feels categorically different unless AI has legal standing as a person. We’re talking about training LLMs; there’s nothing more going on here than people using computers.

          • ferrule@sh.itjust.works · +1/−1 · edited · 5 hours ago

            So then anyone who uses a computer to make music would be in violation?

            Or is it about some amount of computer-generated content? How many notes? If it’s not a sample of a song, how does one know how many of those notes are attributable to which artist being stolen from?

            What if I have someone else listen to a song and they generate a few bars of a song for me? Is it different that a computer listened and then generated output?

            To me it sounds like artists were open to some types of violations but not others. If an AI model listened to the radio, most of these issues go away, unless we’re saying that humans who listen to music and write similar songs are fine while people who use computers to calculate the statistically most common song are breaking the law.

            • ultranaut@lemmy.world · +2 · 5 hours ago

              Potentially yes. If you use existing IP to make music, doing it with a computer isn’t going to change how the law works. It gets super complicated, and there’s ambiguity depending on the specifics, but mostly: if you do it in a non-obvious way and no one knows how you did it, you’re going to be fine; anything beyond that and you will potentially get sued, even if what you did was a legally permissible use of the IP. Rightsholders generally hate anyone who isn’t them making money off their IP, regardless of how they do it or whether they have a right to, unless they paid for a license.

              • ferrule@sh.itjust.works · +1 · 4 hours ago

                That sounds like a setup to go after only those you can make money from, not to actually protect IP.

                By definition if your song is a hit it is heard by everyone. How do we show my new song is a direct consequence of hearing X song while your new song isn’t due to you hearing X song?

                I can see an easy lawsuit: put out a song, then claim that anyone who heard it “learned” how to play their new album this way. The fact that AI can output something that sounds different from any individual song it learned from means nearly all works could be claimed as derivative.

      • FaceDeer@fedia.io · +2/−13 · 1 day ago

        A lot of the griping about AI training involves data that’s been freely published. Stable Diffusion, for example, trained on public images available on the internet for anyone to view, but led to all manner of ill-informed public outrage. LLMs train on public forums and news sites. But people have this notion that copyright gives them some kind of absolute control over the stuff they “own” and they suddenly see a way to demand a pound of flesh for what they previously posted in public. It’s just not so.

        I have the right to analyze what I see. I strongly oppose any move to restrict that right.

        • kittenzrulz123@lemmy.blahaj.zone · +12 · 22 hours ago

          Publicly available =/= freely published

          Many images are made and published with anti AI licenses or are otherwise licensed in a way that requires attribution for derivative works.

          • FaceDeer@fedia.io · +2/−5 · 22 hours ago

            The problem with those things is that the viewer doesn’t need that license in order to analyze them. They can just refuse the license. Licenses don’t apply automatically; you have to accept them. And since they’re contracts, they need to offer consideration, not just impose restrictions.

            An AI model is not a derivative work; it doesn’t include any identifiable pieces of the training data.

            • Knock_Knock_Lemmy_In@lemmy.world · +3 · 13 hours ago

              it doesn’t include any identifiable pieces of the training data.

              It does. For example, Harry Potter books can be easily identified.
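One crude way to probe for this kind of verbatim memorization is to look for long word n-grams shared between a model’s output and a source text. This is only an illustrative sketch with an arbitrary n-gram length, not the method of any specific study:

```python
# Illustrative memorization probe: find long word n-grams that appear
# in both a source text and a model's output. The length n=8 is an
# arbitrary assumption; real studies tune this threshold carefully.
def shared_ngrams(source, output, n=8):
    """Return the set of n-word sequences appearing in both texts."""
    def ngrams(text):
        words = text.split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(source) & ngrams(output)


def looks_memorized(source, output, n=8):
    """True if the output reproduces at least one long run verbatim."""
    return len(shared_ngrams(source, output, n)) > 0
```

A long shared run is strong evidence the source passed through training, which is the sense in which well-known books are “identifiable” in model output.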

        • AwesomeLowlander@sh.itjust.works · +8 · 23 hours ago

          It’s also pretty clear they used a lot of books and other material they didn’t pay for, obtained via illegal downloads. I’m fine with the practice itself; I just want it legalised for everyone.

          • ferrule@sh.itjust.works · +2 · 3 hours ago

            I’m wondering: when I go to the library and read a book, does that mean I can never become an author because I’m tainted? Or am I only tainted if I stole the book?

            To me this is only a theft case.

            • AwesomeLowlander@sh.itjust.works · +2 · 3 hours ago

              That’s the whole problem with AI and artists complaining about theft. You can’t draw a meaningful distinction between what people do and what the AI is doing.

    • WalnutLum@lemmy.ml · +8 · 23 hours ago

      This isn’t quite correct either.

      The reality is that there’s a bunch of court cases and laws still up in the air about what AI training counts as, and until those are resolved the most we can make is conjecture and vague moral posturing.

      The closest we have is likely the court decisions on music sampling, and so far those haven’t been consistent, mostly hinging on “intent” and “effect on sales of the original”. So based on that logic, whether or not AI training counts as copyright infringement is likely going to come down to whether shit like “ghibli filters” actually provably (at least as far as a judge is concerned) fuck with Ghibli’s sales.

      • Knock_Knock_Lemmy_In@lemmy.world · +3 · 13 hours ago

        court decisions on music sampling and so far those haven’t been consistent,

        Grand Upright Music, Ltd. v. Warner Bros. Records Inc. (1991) – rapper Biz Markie sampled Gilbert O’Sullivan’s “Alone Again (Naturally)” without permission; the court held samples must be licensed.

        Bridgeport Music, Inc. v. Dimension Films (2005) – any unauthorized sampling, no matter how minimal, is infringement.

        VMG Salsoul v. Ciccone (2016) – to determine whether a use is de minimis, one must consider whether an average audience would recognize the appropriation from the original work in the accused work.

        • WalnutLum@lemmy.ml · +3 · 13 hours ago

          Campbell v. Acuff-Rose Music, Inc. (1994) – established that the fact that money is made from a work does not make fair use impossible; commerciality is merely one component of a fair use analysis.

            • WalnutLum@lemmy.ml · +2 · 10 hours ago

              It’s not quite so cut and dried, as there are also recent Supreme Court decisions:

              Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith (2023) - “At issue was the Prince Series created by Andy Warhol based on a photograph of the musician Prince by Lynn Goldsmith. It held Warhol’s changes were insufficiently transformative to fall within fair use for commercial purposes, resolving an issue arising from a split between the Second and Ninth circuits among others.”

              Jack Daniel’s Properties, Inc. v. VIP Products LLC (also 2023) - “The case deals with a dog toy shaped similar to a Jack Daniel’s whiskey bottle and label, but with parody elements, which Jack Daniel’s asserts violates their trademark. The Court unambiguously ruled in favor of Jack Daniel’s as the toy company used its parody as its trademark, and leaving the Rogers test on parody intact.”

              The aforementioned Rogers test was quoted in both decisions but with pretty different interpretations of the coverage of “parody.”

              One thing seems to be key: intent. As long as an AI isn’t purposefully trained to mimic a style, it’s probably safe; but things like style LoRAs and style CLIP encodings are likely going to be decided on whether the Supreme Court decided to have lunch that day.

              • Knock_Knock_Lemmy_In@lemmy.world · +2 · 10 hours ago

                Note that both of those rulings are for the original rights holders (and therefore against AI tech).

                What’s interesting to me is that we now have a Goliath-vs-Goliath fight, with AI tech in one corner and the MPAA and RIAA (plus a lot of case history) in the other.

                Either way, I can’t see David (us) coming out on top.