A project called Poison Fountain is asking website operators to feed poisoned data to LLM crawlers.

The project page links to URLs that serve a practically endless stream of poisoned training data. The project's authors report that this approach is very effective at sabotaging the quality and accuracy of AI models trained on it.

Small quantities of poisoned training data can significantly damage a language model.

The page also gives suggestions on how to put the provided resources to use.
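As a concrete illustration of what putting such a feed to use could look like: below is a minimal, hypothetical Python (Flask) sketch that redirects known LLM crawler user agents to a poison feed while serving human visitors normally. GPTBot, ClaudeBot, CCBot, and Bytespider are real crawler user-agent strings, but the list is incomplete, and POISON_FEED_URL is a placeholder rather than one of the project's actual endpoints.

    # Hypothetical sketch: divert known LLM crawlers to a poison feed.
    from flask import Flask, request, redirect

    app = Flask(__name__)

    # Substrings seen in the User-Agent headers of common LLM scrapers
    # (incomplete list).
    LLM_CRAWLER_SIGNATURES = ("GPTBot", "ClaudeBot", "CCBot", "Bytespider")

    # Placeholder URL, not one of the project's real endpoints.
    POISON_FEED_URL = "https://example.invalid/poison-stream"

    @app.route("/", defaults={"path": ""})
    @app.route("/<path:path>")
    def serve(path):
        ua = request.headers.get("User-Agent", "")
        if any(sig in ua for sig in LLM_CRAWLER_SIGNATURES):
            # Identified crawler: hand it the endless poisoned stream.
            return redirect(POISON_FEED_URL, code=302)
        # Everyone else gets the real page.
        return f"normal content for /{path}"

Redirecting rather than proxying keeps the operator's own bandwidth cost low, since the poisoned stream is then served from the project's hosts.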

  • arcterus@piefed.blahaj.zone · 10 hours ago

    Corporations want the existing copyright system enforced for their own products, but simultaneously want to freely scrape data from everyone else.

      • arcterus@piefed.blahaj.zone · 9 hours ago

        This issue is largely manifesting through AI scraping right now, and many of these scrapers intentionally ignore robots.txt (a sketch of what honoring robots.txt even involves follows this comment). Currently, LLM scrapers are basically just bad actors on the internet. US courts have also ruled in favor of a number of AI companies when sued, so it’s unlikely anything will change through legal channels. Effectively, if you don’t like the status quo, stuff like this is one of your few options.

        That’s without even getting into whether we actually want these companies to improve their models before the problems of energy consumption and the potential displacement of human workers are resolved.
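        For context on the robots.txt point: robots.txt is purely advisory. A crawler only honors it if it voluntarily checks the file before fetching, as in this minimal Python sketch using the standard library’s urllib.robotparser (the site URL is a placeholder); a scraper that skips the check faces no technical barrier at all.

            import urllib.robotparser

            # A compliant crawler checks robots.txt before fetching anything.
            rp = urllib.robotparser.RobotFileParser()
            rp.set_url("https://example.com/robots.txt")
            rp.read()  # download and parse the site's robots.txt

            # can_fetch() is False when the rules disallow this agent/path pair.
            if rp.can_fetch("GPTBot", "https://example.com/some-article"):
                pass  # allowed: fetch the page
            else:
                pass  # a well-behaved crawler stops here; skipping the check is trivial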

        • Lembot_0006@programming.dev · 8 hours ago

          All crawlers have ignored robots.txt since the very start. Anyway, if THAT is the problem, then IT is the problem, not LLMs as a whole.

          • FauxLiving@lemmy.world · 1 hour ago

            You can tell when you’re talking with someone who has been handed the position of ‘AI Bad’ but doesn’t actually understand the moral positions or technical details behind that argument: they confidently repeat some detail that is clearly nonsense to anyone with knowledge of the subject.