• Buttons@programming.dev · 1 year ago

    If I were the reporter my next question would be:

    “Do you feel that not knowing the most basic things about your product reflects on your competence as CTO?”

    • ForgotAboutDre@lemmy.world · 1 year ago

      Hilarious, but if the reporter asked this they would find it harder to get invited to events, which is a problem for journalists. Unless you’re very well regarded for your journalism, you can’t push powerful people without risking your career.

    • RatBin@lemmy.world · 1 year ago

      Also about this line:

      Others, meanwhile, jumped to Murati’s defense, arguing that if you’ve ever published anything to the internet, you should be perfectly fine with AI companies gobbling it up.

      No, I am not fine. When I wrote that stuff and that research on old phpBB forums, I did not do it knowing a future machine learning system would eat it up without my consent. I never gave consent for that, despite it being publicly available, because that use didn’t even exist back then. Many other things are also publicly available but still copyrighted, on the same basis: you can publish and share content under conditions defined by the creator of that content. So what is this, when I use Z-Library I’m evil for pirating content, but OpenAI can do the same just fine because of its huge wallet? Guess what, this will eventually create a crisis of trust, a tragedy of the commons if you will, once AI-generated content makes up the bulk of your future internet search. Do we even want this?

    • abhibeckert@lemmy.world · 1 year ago

      Every video ever created is copyrighted.

      The question is — do they need a license? Time will tell. This is obviously going to court.

    • blazeknave@lemmy.world · 1 year ago

      I feel like at their scale, if there’s going to be a figurehead, marketable CTO anywhere, it’s going to be at this company. If not, you’re right, and she’s lying lol

  • CosmoNova@lemmy.world · 1 year ago

    I almost want to believe they legitimately do not know, nor care, that they’re committing a gigantic data and labour heist, but the truth is they know exactly what they’re doing and they rub our noses in it.

    • Bogasse@lemmy.ml · 1 year ago

      Yeah, the fact that AI progress just relies on “we will make so much money that no lawsuit will meaningfully alter our growth” is really infuriating. The fact that the general audience apparently doesn’t care is even more infuriating.

    • laxe@lemmy.world · 1 year ago

      Of course they know what they’re doing. Everybody knows this, how could they be the only ones that don’t?

      • toddestan@lemmy.world · 1 year ago

        I’d say not really, Tolkien was a writer, not an artist.

        What you are doing is violating the trademark Middle-Earth Enterprises has on the Gandalf character.

        • A_Very_Big_Fan@lemmy.world · 1 year ago

          The point was that I absorbed that information to inform my “art”, since we’re equating training with stealing.

          I guess this would have been a better example lol. It’s clearly not Gandalf, but I wouldn’t have ever come up with it if I hadn’t seen that scene

  • Fedizen@lemmy.world · 1 year ago

    This is why code AND cloud services shouldn’t be copyrightable or licensable without some kind of transparency legislation to ensure people are honest: either forced open source, or some kind of code review submitted to a government authority that can be unsealed in legal disputes.

  • RatBin@lemmy.world · 1 year ago

    Obviously nobody fully knows where so much training data comes from. They used web scraping tools like there was no tomorrow; with that amount of information you can’t tell where all the training material came from. Which doesn’t mean the tool is unreliable, but that we don’t truly know why it’s that good, unless you can somehow access all the layers of the digital brains operating these machines, and that isn’t doable with a closed-source model, so we can only speculate. This is what’s called a black box, and we use it because we trust the output enough to do so. Knowing in detail the process behind each query would thus be taxing.

    Anyway… I’m starting to see more and more AI-generated content. YouTube is slowly but surely losing significance and importance as I don’t search for information there any longer, AI being one of the reasons for this.

    • qaz@lemmy.world · 1 year ago

      They use awkward stills to generate clicks

      It’s annoying and distracting, just like the headline.

  • Politically Incorrect@lemmy.world · 1 year ago

    Watching a video or reading an article isn’t copyright infringement when a human does it, so why is it infringement when an “AI” does it? I believe the copyright infringement is committed through the prompt, and so by the user, not the tool.

    • topinambour_rex@lemmy.world · 1 year ago

      What is this human going to do with what they’ve read? Are they going to produce something using part of this book or this article?

      If yes, that’s copyright infringement.

    • echo64@lemmy.world · 1 year ago

      If you read an article, then copy parts of that article into a new article, that’s copyright infringement. Same with AIs.

      • anlumo@lemmy.world · 1 year ago

        Depends on how much is copied; if it’s a small amount, it’s fair use.

        • FireTower@lemmy.world · 1 year ago

          Fair use is a four-factor test. The amount used is one factor, but a small amount being used doesn’t automatically make something fair use. You could use a single frame of a movie and have it not qualify as fair use.

        • echo64@lemmy.world · 1 year ago

          Fair use depends on a lot, and just being a small amount doesn’t decide it. It’s the actual use. Small amounts just often fly under the radar of legal teams.

  • Gakomi@lemmy.world · 1 year ago

    No company CEO knows shit about what goes on in the dev department, so her answer does not surprise me; ask the devs or the team leader in charge of the project. The CEO is only there to make sure the company makes money, since they and the shareholders only care about money!

      • Gakomi@lemmy.world · 1 year ago

        She should, but she does not. As I mentioned in another post, anyone at team leader level or above in all the companies I have worked at so far barely had any technical skill and had no idea about this stuff, only some bits and pieces they got from documentation the dev team made. They had some vague idea of how our infrastructure worked, but that’s about it.

  • dezmd@lemmy.world · 1 year ago

    LLMs are just another iteration of search. Search engines do the same thing. Do we outlaw search engines?

    • AliasAKA@lemmy.world · 1 year ago

      SoRA is a generative video model, not exactly a large language model.

      But to answer your question: if all LLMs did was redirect you to where the content was hosted, then it would be a search engine. But instead they reproduce what someone else was hosting, which may include copyrighted material. So they’re fundamentally different from a simple search engine. They don’t direct you to the source, they reproduce a facsimile of the source material without acknowledging or directing you to it. SoRA is similar. It produces video content, but it doesn’t redirect you to finding similar video content that it is reproducing from. And we can argue about how close something needs to be to an existing artwork to count as a reproduction, but I think for AI models we should enforce citation models.

      • dezmd@lemmy.world · 1 year ago

        How does a search engine know where to point you? It ingests all that data and processes it ‘locally’ on the search engine’s systems, using algorithms to organize the data for search. It’s effectively the same dataset.

        An LLM is absolutely another iteration of search, with natural language output for the same input data. Are you advocating that search engine data ingestion isn’t fair use and is a copyright violation as well?

        You equate LLMs to intelligence, which they are not. It’s algorithmic search iteration with natural language responses, but that doesn’t sound as cool as AI. It’s neat, it’s useful, and yes, it should cite the sourcing details (upon request), but it’s not (yet?) a real intelligence, and it’s equal to search in terms of fair use and copyright arguments.
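        For what it’s worth, the “ingest and organize, then point you at the source” behaviour described here can be sketched with a toy inverted index. This is only an illustrative Python sketch with made-up URLs, not how any real search engine is implemented:

        ```python
        # Toy inverted index: ingest documents, but only ever return pointers
        # back to the sources rather than reproducing their content.
        from collections import defaultdict

        documents = {
            "https://example.com/a": "cats are popular pets",
            "https://example.com/b": "dogs are loyal pets",
        }

        index = defaultdict(set)
        for url, text in documents.items():
            for term in text.lower().split():
                index[term].add(url)

        def search(query):
            """Return URLs of documents that contain every query term."""
            terms = query.lower().split()
            results = set(index.get(terms[0], set())) if terms else set()
            for term in terms[1:]:
                results &= index.get(term, set())
            return results

        print(search("loyal pets"))  # {'https://example.com/b'} -- a link, not a copy
        ```

        A generative model, by contrast, returns newly synthesized text rather than a pointer, which is where the attribution argument in the replies comes in.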

        • AliasAKA@lemmy.world · 1 year ago

          I never equated LLMs to intelligence. And indexing the data is not the same as reproducing the webpage or the content on a webpage. When you search, to get beyond the small snippet that matched your query, you have to follow a link to the source material. Now of course Google doesn’t like this, so they did that stupid AMP thing, which has its own issues, and I disagree with AMP as a general rule as well. So, LLMs can look at the data; I just don’t think they can reproduce that data without attribution (or payment to the original creator). Perplexity.ai is a little better in this regard because it does link back to sources and is attempting to be a search-engine-like entity. But OpenAI, in almost all cases, is not.

    • dantheclamman@lemmy.world · 1 year ago

      I feel conflicted about the whole thing. Technically it’s a model. I don’t feel that people should be able to sue me as a scientist for making a model based on publicly available data. I myself am merely trying to use the model to explain stuff about the world. But OpenAI is also selling access to the outputs of the model, which can very closely approximate people’s intellectual property. Also, most of the training data was accessed via scraping and other gray-market methods that often explicitly violated the TOU of the various places they scraped from. So it is all very difficult to sort through ethically.