A project called Poison Fountain is asking website operators to feed poisoned data to LLM crawlers.

The project page links to URLs that provide a practically endless stream of poisoned training data. The project’s authors have determined that this approach is very effective at ultimately sabotaging the quality and accuracy of AI trained on it.
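
The general idea is easy to sketch. What follows is a hypothetical illustration only (Poison Fountain’s actual implementation isn’t described here): a tiny HTTP endpoint that serves an unending supply of machine-generated gibberish, with links back into itself so crawlers keep following them.

```python
# Hypothetical sketch, not the project's real code: serve endless
# plausible-looking nonsense to any crawler that requests a page.
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["river", "seven", "gladly", "quantum", "therefore", "ocean"]

class PoisonHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        # A page of grammatical-looking but meaningless sentences.
        for _ in range(100):
            sentence = " ".join(random.choices(WORDS, k=12)).capitalize()
            self.wfile.write(f"<p>{sentence}.</p>\n".encode())
        # A fresh link every time, so the "site" never runs out of pages.
        self.wfile.write(f'<a href="/{random.random()}">next</a>'.encode())

if __name__ == "__main__":
    HTTPServer(("", 8080), PoisonHandler).serve_forever()
```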

Small quantities of poisoned training data can significantly damage a language model.

The page also gives suggestions on how to put the provided resources to use.

  • termaxima@slrpnk.net · +16 · 5 hours ago

    Been thinking about making one of these too, especially since I have a catchy name: asbestos

  • vacuumflower@lemmy.sdf.org · +11/−5 · 8 hours ago

    Suppose I were optimistic about this technology but pessimistic about its current stage of development; then I’d expect this to be a cure. It’s a problem they’ll have to solve. A test they’ll have to pass.

    If someone builds a mechanism inside these things that constructs a graph of syllogisms, no kind of poisoned input data will be able to hurt them.

    So: this is a good thing, but when people call it a rebellion, it’s not.

    • FlashMobOfOne@lemmy.world · +5 · 2 hours ago

      “A test they’ll have to pass.”

      This makes me chuckle, as they invented euphemisms like ‘hallucinations’ because their LLMs can’t do what they promise. Fabulous marketing, but clearly they didn’t do enough testing.

    • Disillusionist@piefed.world (OP) · +14/−1 · 7 hours ago

      Not every problem can be cured immediately. Battles are rarely won with a single attack. A good thing is not the same as nothing.

  • chunes@lemmy.world · +16/−10 · 9 hours ago

    “Small quantities of poisoned training data can significantly damage a language model.”

    Source: trust me bro.

    Nightshade tried the same thing and it never worked.

  • BigBolillo@mgtowlemmy.org · +1/−31 · 9 hours ago

    Seems like a bad take from my POV. As someone who uses and has made money using LLMs, I feel it’s not OK to poison them; I wouldn’t feel OK with myself getting something for free, and even making money with it, while poisoning it at the same time. So my take is: you can always block crawlers in your nginx.conf with some extra steps; you can even use an LLM to do it for you and improve it to block all major crawlers. IMHO, if it’s public data, it’s public for crawlers too; it’s up to you whether you set up a block for them on your behalf.
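
    For anyone who prefers the blocking route, a minimal sketch of the nginx idea might look like the fragment below; the user-agent list is illustrative, not exhaustive, and crawlers change their agents over time:

    ```nginx
    # Hypothetical fragment (the map goes in the http context): refuse
    # requests from some known AI crawler user agents. Illustrative only.
    map $http_user_agent $is_ai_crawler {
        default        0;
        ~*GPTBot       1;   # OpenAI
        ~*ClaudeBot    1;   # Anthropic
        ~*CCBot        1;   # Common Crawl
        ~*Bytespider   1;   # ByteDance
    }

    server {
        listen 80;
        server_name example.com;   # placeholder

        if ($is_ai_crawler) {
            return 403;
        }

        # ... the rest of the site configuration ...
    }
    ```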

    • hector@lemmy.today · +16/−1 · 8 hours ago (edited)

      So it would not be fair to prevent AI from violating every single copyright on Earth? That is a novel take.

      Especially as most people do not use AI, but companies are trying to force it on them, to ultimately replace half the workforce and send the economy into a doom spiral.

    • RalfWausE@feddit.org · +19/−3 · 8 hours ago

      What about the following take: LLMs are an abomination that consumes enormous masses of resources for… well… really nothing, besides being a tool to further enshittify the Internet and the world as a whole, a tool that makes it easy to create ever more divisive content (not to mention the special content Grok is now known for), killing jobs and replacing genuine human creativity with a cheap, warped imitation thereof.

      My opinion is: everybody who uses or promotes this technology is an accomplice in making the world a worse place.

    • Disillusionist@piefed.world (OP) · +16 · 9 hours ago

      “Public” is a tricky term. At this point everything is being treated as public by LLM developers. Maybe not you specifically, but a lot of people aren’t happy with how their data is being used to train AI.

      • Señor Mono@feddit.org · +9 · 8 hours ago

        Also, they always come up with new ways to circumvent blocking mechanisms, pushing extra work onto admins.

        Remember how judges ruled when somebody circumvented copy restrictions on media?

  • FaceDeer@fedia.io · +9/−10 · 9 hours ago

    Doesn’t work, but if it makes people feel better, I suppose they can waste their resources doing this.

    Modern LLMs aren’t trained on just whatever raw data can be scraped off the web any more. They’re trained with synthetic data that’s prepared by other LLMs and carefully crafted and curated. Folks are still thinking ChatGPT 3 is state of the art here.

    • XLE@piefed.social · +3 · 5 hours ago

      Do you have any basis for this assumption, FaceDeer?

      Based on your pro-AI-leaning comments in this thread, I don’t think people should accept defeatist rhetoric at face value.

      • FaceDeer@fedia.io · +1 · 3 hours ago

        A basic Google search for “synthetic data llm training” will give you lots of hits describing how the process goes these days.

        Take this as “defeatist” if you wish; as I said, it doesn’t really matter. In the early days of LLMs, when ChatGPT first came out, the strategy was to dump as much raw data onto the models as possible and hope that quantity let them figure something out. Since then it’s been learned that quality beats quantity, so training data is far more carefully curated these days. Not because there’s “poison” in it, just because it results in better LLMs. Filtering out poison will happen as a side effect.

        It’s like trying to contaminate a city’s water supply by peeing in the river upstream of the water treatment plant drawing from it. The water treatment plant is already dealing with all sorts of contaminants anyway.
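
        To make the curation claim concrete, here’s a toy sketch of the kind of heuristic filtering such a pipeline might apply; the thresholds are invented for illustration, and real pipelines (classifier scoring, deduplication, perplexity filtering) are far more elaborate:

        ```python
        # Purely illustrative quality filter; the numbers are made up.
        def looks_like_garbage(doc: str) -> bool:
            """Crude checks that would incidentally drop much poisoned text."""
            words = doc.split()
            if len(doc) < 200 or not words:          # too short to be useful
                return True
            if len(set(words)) / len(words) < 0.3:   # highly repetitive
                return True
            letters = sum(c.isalpha() or c.isspace() for c in doc)
            return letters / len(doc) < 0.8          # mostly symbols / noise

        scraped_docs = ["...raw web text..."]        # stand-in for a crawl
        curated = [d for d in scraped_docs if not looks_like_garbage(d)]
        ```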

    • Disillusionist@piefed.world (OP) · +12 · 9 hours ago

      From what I’ve heard, the influx of AI data is one of the reasons actual human data is becoming increasingly sought after. AI training AI has the potential to become a sort of digital inbreeding that suffers in areas like originality and other ineffable human qualities that AI still hasn’t quite mastered.

      I’ve also heard that this particular approach to poisoning AI is newer and thought to be quite effective, though I can’t personally speak to its efficacy.

  • Lembot_0006@programming.dev · +11/−41 · 10 hours ago

    Idiots: This new technology is still quite ineffective. Let’s sabotage its improvement!

    Imbeciles: Yeah!

    • Stern@lemmy.world · +46 · 10 hours ago

      Corpos: Don’t steal our stuff! That’s piracy!

      Also corpos: Your stuff? My stuff now.

      Bootlickers: Oh my god this shoe polish is delicious.

      • Lembot_0006@programming.dev · +5/−18 · 9 hours ago

        You should pick one: either you like the current copyright system or you don’t. You can’t have it both ways.

        • arcterus@piefed.blahaj.zone · +17 · 8 hours ago

          Corporations want the existing copyright system for their own products but simultaneously want to freely scrape data from everyone else.

            • arcterus@piefed.blahaj.zone · +12 · 8 hours ago (edited)

              This issue is largely manifesting through AI scraping right now. Additionally, many intentionally ignore robots.txt. Currently, LLM scrapers are basically just bad actors on the internet. Courts have also ruled in favor of a number of AI companies when sued in the US, so it’s unlikely anything will change. Effectively, if you don’t like the status quo, stuff like this is one of your few options.

              This isn’t even touching on whether we actually want these companies to improve their models before the problems of energy consumption and the displacement of human workers are resolved.

              • Lembot_0006@programming.dev · +5/−7 · 7 hours ago

                All crawlers have ignored robots.txt since the very start. Anyway, if THAT is the problem, then IT is the problem, not LLMs as a whole.

    • Disillusionist@piefed.world (OP) · +33/−2 · 10 hours ago

      AI companies could start, I don’t know, maybe asking for permission to scrape a website’s data for training? Or maybe try behaving more ethically in general? Perhaps then they might not risk people poisoning data they clearly never agreed to have used for training?

      • Lembot_0006@programming.dev · +7/−15 · 9 hours ago

        Why should they ask permission to read freely provided data? Nobody else asks for permission, but LLM trainers somehow should? And what do you want from them from an ethical standpoint?

        • GunnarGrop@lemmy.ml · +13 · 9 hours ago

          Much of it might be freely available data, but there’s a huge difference between you accessing a website for data and an LLM doing the same thing. We’ve had bots scraping websites since the ’90s; it’s not a new thing. And for as long as scraping bots have existed, we’ve had a standard on the web to deal with them, called robots.txt: a text file telling bots what they’re allowed to do on a website and how they should behave.
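
          For reference, robots.txt is just a plain text file served at the site root; something like the following (the bot name and path are illustrative) asks one bot to stay out entirely and everyone else to avoid a single directory:

          ```
          # Served at example.com/robots.txt. Compliance is voluntary;
          # nothing enforces it against a misbehaving crawler.
          User-agent: GPTBot
          Disallow: /

          User-agent: *
          Disallow: /private/
          ```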

          LLMs are notorious for disrespecting robots.txt, leading to situations where small companies and organisations have their websites scraped so thoroughly and frequently that they can’t even stay online anymore, as well as skyrocketing their operational costs. In the last few years we’ve had to develop ways just to protect ourselves against this. See the “Anubis” project.

          Hence, it’s much more important that LLMs follow the rules than that you and I do so on an individual level.

          It’s the difference between you killing a couple of bees in your home versus an industry specialising in exterminating bees at scale. The efficiency is a big factor.

        • Disillusionist@piefed.world (OP) · +9 · 9 hours ago

          Is the only imaginable system for AI to exist one in which every website operator, musician, artist, writer, etc. has no say in how their data is used? Is it possible to have a more consensual arrangement?

          As for the question about ethics, there is a lot of ground to cover, and a lot of it is being discussed. I’ll basically reiterate what I said about data rights: I believe they are pretty fundamental to human rights, for a lot of reasons. AI is killing open source and claiming the whole of human experience for its own training purposes. I find that unethical.

              • Lembot_0006@programming.dev · +2/−6 · 8 hours ago

                The guy is talking about consulting, as I understand it. Yes, LLMs are great for reading documentation; that’s what they’re for. Now people can use those libraries without spending ages reading through docs. That’s progress. I see it as a way to get more open source written, because it has become simpler and less tedious.

                • Disillusionist@piefed.world (OP) · +7 · 8 hours ago

                  He’s jumping ship because it’s destroying his ability to eke out a living. The problem isn’t a small one; what’s happening to him isn’t an isolated case.

        • ExLisper@lemmy.curiana.net · +4 · 8 hours ago

          Yes, they should, because they generate way more traffic. Why do you think people are trying to protect websites from AI crawlers? Because they want to keep public data secret?

          Also, everyone knows AI companies used copyrighted materials and private data without permission. If you think they only used public data you’re uninformed or lying on their behalf.

          • Lembot_0006@programming.dev · +4/−3 · 8 hours ago

            I personally consider the current copyright laws completely messed up, so I see no problem in using any data technically available for processing.

            • ExLisper@lemmy.curiana.net · +7 · 8 hours ago

              OK, so you think it’s fine for big companies to break the laws you don’t like. Cool. I’m sure those big companies won’t sue you when you infringe on some law of theirs that you don’t like.

              And I like the way you just ignored the two other issues I mentioned. Are you fine with AI bots slowing sites like Codeberg to a crawl? Are you fine with AI companies using personal data without consent?

                • ExLisper@lemmy.curiana.net · +3 · 6 hours ago

                  I’m also fine with them using data they can get for free, like, I don’t know, weather data they collect themselves.

                  Data hosted by private individuals and open source projects is not free. Someone has to pay for hosting, and AI companies sucking up data with an army of bots is raising the cost of hosting beyond the means of those people and projects. They are shifting the costs of providing the “free” data onto the community while keeping all the profits.

                  Private data used without consent is also not free. It’s valuable, protected data, and AI companies are simply stealing it. Do you consider stolen things free?

                  I see your attitude is “they don’t hurt me personally, and I don’t care what they do to other people”. That’s either ignorant or outright antisocial. Also a bit bootlickish.

        • BaroqueInMind@piefed.social · +5/−12 · 9 hours ago

          As someone who self-hosts a LLM and trains it on web data regularly to improve my model, I get where your frustration is coming from.

          But engaging in discourse here, where people already have a heavy bias against machine-learning language models, is a fruitless effort. No one here is going to provide you catharsis with a genuine conversation that isn’t rhetoric.

          Just put the keyboard down and walk away.

          • Rekall Incorporated@piefed.social · +5 · 9 hours ago

            I don’t have a bias against LLMs; I use them regularly, albeit either for casual things (movie recommendations) or as an automation tool in work areas where I can somewhat easily validate the output or where the specific task is low-impact.

            I am just curious, do you respect robots.txt?

          • Disillusionist@piefed.world (OP) · +1 · 9 hours ago

            I can’t speak for everyone, but I’m absolutely glad to have good-faith discussions about these things. People have different points of view, and I certainly don’t know everything. It’s one of the reasons I post, for discussion. It’s really unproductive to make blanket statements that try to end discussion before it starts.

          • FaceDeer@fedia.io · +2/−1 · 9 hours ago

            I think it’s worthwhile to show people that views outside of their like-minded bubble exist. One of the nice things about the Fediverse over Reddit is that the upvote and downvote tallies are both shown, so we can see that opinions are not a monolith.

            Also, engaging in Internet debate is never about convincing the person you’re actually talking to. That almost never happens. The point of debate is to present convincing arguments for the less-committed casual readers who are lurking rather than participating directly.

            • Disillusionist@piefed.world (OP) · +1 · 8 hours ago

              I agree with you that there can be value in “showing people that views outside of their like-minded bubble[s] exist”. And you can’t change everyone’s mind, but I think it’s a bit cynical to assume you can’t change anyone’s mind.