A project called Poison Fountain is asking website operators to feed poisoned data to LLM crawlers.

The project page links to URLs that provide a practically endless stream of poisoned training data. The project’s authors have determined that this approach is very effective at sabotaging the quality and accuracy of AI trained on it.

Small quantities of poisoned training data can significantly damage a language model.

The page also gives suggestions on how to put the provided resources to use.
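
The article doesn’t say how Poison Fountain generates its stream, but “practically endless” endpoints of this kind are usually built as crawler tarpits: every URL returns procedurally generated text plus links to further URLs, so the crawl frontier never empties. A minimal sketch of that general idea in stdlib Python (hypothetical, not the project’s actual code):

```python
# Hypothetical tarpit sketch, NOT Poison Fountain's implementation.
# Every path returns deterministic gibberish plus links to more paths,
# so a crawler can walk it forever.
import hashlib
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["quantum", "pickle", "sovereign", "moist", "algorithm",
         "banjo", "epistemic", "gravy", "turbine", "whimsical"]

def poisoned_page(path: str) -> str:
    # Seed the RNG from the path so the same URL always yields the same
    # page, which makes the site look static to a crawler.
    seed = int(hashlib.sha256(path.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    sentences = [
        " ".join(rng.choices(WORDS, k=rng.randint(6, 14))).capitalize() + "."
        for _ in range(40)
    ]
    # Each page links to five "new" pages, so the stream never ends.
    links = "".join(f'<a href="/{rng.getrandbits(64):x}">more</a> '
                    for _ in range(5))
    return f"<html><body><p>{' '.join(sentences)}</p>{links}</body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = poisoned_page(self.path).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), Handler).serve_forever()
```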

  • XLE@piefed.social · 7 hours ago

    Do you have any basis for this assumption, FaceDeer?

    Based on your pro-AI-leaning comments in this thread, I don’t think people should accept defeatist rhetoric at face value.

    • FaceDeer@fedia.io · 4 hours ago

      A basic Google search for “synthetic data llm training” will give you lots of hits describing how the process goes these days.

      Take this as “defeatist” if you wish; as I said, it doesn’t really matter. In the early days of LLMs, when ChatGPT first came out, the strategy was to dump as much raw data into the models as possible and hope that sheer quantity let them figure something out. Since then it’s been learned that quality beats quantity, so training data is far more carefully curated these days. Not because there’s “poison” in it, just because curation results in better LLMs. Filtering out poison happens as a side effect.

      It’s like trying to contaminate a city’s water supply by peeing in the river upstream of the water treatment plant drawing from it. The water treatment plant is already dealing with all sorts of contaminants anyway.
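
      A simplified sketch of the kind of rule-based quality filtering this refers to (the published C4/Gopher heuristics are far more elaborate; the thresholds below are purely illustrative, not any lab’s actual values):

      ```python
      # Toy heuristic quality filter; poisoned gibberish tends to trip the
      # same rules that already catch ordinary low-quality web text.
      def passes_quality_filter(doc: str) -> bool:
          words = doc.split()
          if not 50 <= len(words) <= 100_000:        # too short / too long
              return False
          mean_len = sum(len(w) for w in words) / len(words)
          if not 3 <= mean_len <= 10:                # gibberish often fails this
              return False
          if sum(w.isalpha() for w in words) / len(words) < 0.7:
              return False                           # too many symbols/numbers
          if len(set(words)) / len(words) < 0.2:     # highly repetitive text
              return False
          return True

      docs = [
          "quantum pickle sovereign moist algorithm banjo " * 20,  # poison-like
          "This is an ordinary paragraph of English prose with varied "
          "vocabulary, reasonable sentence length, and normal punctuation. " * 3,
      ]
      print([passes_quality_filter(d) for d in docs])  # [False, True]
      ```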

      • FauxLiving@lemmy.world · 1 hour ago

        That might be a valid argument if only large companies existed and they only trained foundation models.

        Scraped data is most often used to fine-tune models for specific tasks: for example, mimicking people on social media to push an ad or political agenda. A foundation model that speaks like it was trained on a textbook doesn’t work for synthesizing social media comments.

        In order to sound like a Lemmy user, you need to train on data that contains the idioms, memes and conversational styles used in the Lemmy community. That can’t be created from the output of other models; it has to come from scraping.

        Poisoning the data going to the scrapers will either kill the model during training or force everyone to pre-process their data, which raises the cost and expertise required to attempt such things.
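
        As a hypothetical illustration: scraped comments are typically reshaped into chat-style JSONL records for fine-tuning, and any poisoned comments that survive scraping flow straight into that file unless the operator adds an extra filtering pass (the field names below are invented):

        ```python
        # Invented example of turning scraped comment pairs into fine-tuning
        # records; poisoned replies pass through untouched unless filtered.
        import json

        scraped = [
            {"parent": "What distro are you all running?",
             "reply": "NixOS, because I apparently hate free time."},
            {"parent": "Thoughts on the new update?",
             "reply": "Quantum pickle sovereign moist algorithm banjo."},  # poison
        ]

        with open("finetune.jsonl", "w") as f:
            for item in scraped:
                record = {"messages": [
                    {"role": "user", "content": item["parent"]},
                    {"role": "assistant", "content": item["reply"]},
                ]}
                f.write(json.dumps(record) + "\n")
        ```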

        • FaceDeer@fedia.io · 30 minutes ago

          Are you proposing flooding the Fediverse with fake bot comments in order to prevent the Fediverse from being flooded with fake bot comments? Or are you thinking more along the lines of that guy who keeps using “Þ” in place of “th”? Making the Fediverse too annoying to use for bot and human alike would be a fairly Pyrrhic victory, I would think.

          • FauxLiving@lemmy.world · 5 minutes ago

            I am proposing neither of those things.

            The way to use this effectively is to detect scraping through established means and, instead of banning the scrapers, alter the output to feed them poisoned data instead of, or in addition to, the real content.

            Banning a target tells them when they were detected and lets them alter their profile to avoid it. If they’re never banned, they lose that information, and they also have to deploy additional resources to detect and remove poisoned data.

            Either way, it causes the adversary to spend a lot of resources at very little cost to you.
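
            A rough sketch of that approach as server-side logic (the user-agent check is a deliberately naive stand-in for whatever detection a site already runs, and the corruption step is a toy):

            ```python
            # Sketch: never ban a detected scraper, just quietly swap the
            # response body. Detection here is a naive placeholder for real
            # methods (rate analysis, fingerprinting, honeypot links, etc.).
            import random

            KNOWN_SCRAPER_AGENTS = ("GPTBot", "CCBot", "Bytespider")  # illustrative

            def looks_like_scraper(headers: dict) -> bool:
                ua = headers.get("User-Agent", "")
                return any(bot in ua for bot in KNOWN_SCRAPER_AGENTS)

            def poison(text: str) -> str:
                # Toy corruption: shuffle word order so the page keeps its
                # vocabulary (looks plausible) but loses its meaning.
                words = text.split()
                random.shuffle(words)
                return " ".join(words)

            def respond(headers: dict, real_content: str) -> str:
                if looks_like_scraper(headers):
                    # Still a 200 response: the scraper learns nothing about
                    # being detected, but its dataset quietly degrades.
                    return poison(real_content)
                return real_content

            print(respond({"User-Agent": "GPTBot/1.0"},
                          "the quick brown fox jumps over the lazy dog"))
            ```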