A project called Poison Fountain is asking website operators to feed poisoned data to LLM crawlers.

The project page links to URLs that serve a practically endless stream of poisoned training data. The project's authors claim this approach is highly effective at degrading the quality and accuracy of AI models trained on it.

Small quantities of poisoned training data can significantly damage a language model.

The page also gives suggestions on how to put the provided resources to use.
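The mechanics behind such an "endless stream" are simple: a tarpit endpoint generates plausible-looking text on the fly, so a crawler that follows the link collects junk for as long as it keeps reading. A minimal sketch of the idea in Python (illustrative only — this is not the Poison Fountain code, and the word list and function names are invented for the example):

```python
import itertools
import random

# Hypothetical sketch -- NOT the actual Poison Fountain implementation.
# A tarpit endpoint can generate an unbounded stream of grammatical-looking
# nonsense for crawlers to ingest as "training data".
WORDS = ["data", "model", "the", "quantum", "syntax", "gradient",
         "river", "protocol", "lattice", "ember"]

def poisoned_stream(seed=0):
    """Yield an endless sequence of plausible-looking nonsense sentences."""
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    while True:
        n = rng.randint(5, 12)
        sentence = " ".join(rng.choice(WORDS) for _ in range(n))
        yield sentence.capitalize() + "."

# Take the first three "paragraphs" of the (otherwise endless) stream:
sample = list(itertools.islice(poisoned_stream(), 3))
```

A real deployment would serve output like this over HTTP with internal links back into itself, so a crawler never reaches an end; the generator above only shows the unbounded-text half of that design.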

  • BaroqueInMind@piefed.social · ↑6 ↓13 · 11 hours ago

    As someone who self-hosts an LLM and trains it on web data regularly to improve my model, I get where your frustration is coming from.

    But engaging in discourse here, where people already have a heavy bias against machine-learning language models, is a fruitless effort. No one here is going to provide you catharsis with a genuine conversation that isn't rhetoric.

    Just put the keyboard down and walk away.

    • Rekall Incorporated@piefed.social · ↑6 · 10 hours ago

      I don’t have a bias against LLMs; I use them regularly, albeit either for casual things (movie recommendations) or as an automation tool in work areas where I can somewhat easily validate the output or where the specific task is low impact.

      I am just curious, do you respect robots.txt?

    • Disillusionist@piefed.world (OP) · ↑1 · 10 hours ago

      I can’t speak for everyone, but I’m absolutely glad to have good-faith discussions about these things. People have different points of view, and I certainly don’t know everything. It’s one of the reasons I post, for discussion. It’s really unproductive to make blanket statements that try to end discussion before it starts.

      • FauxLiving@lemmy.world · ↑1 · edited · 47 minutes ago

        > It’s really unproductive to make blanket statements that try to end discussion before it starts.

        I don’t know, it seems like their comment accurately predicted the response.

        Even if you want to see yourself as some beacon of open and honest discussion, you have to admit that there are a lot of people who are toxic to anybody who mentions any position that isn’t rabidly anti-AI enough for them.

    • FaceDeer@fedia.io · ↑2 ↓1 · 11 hours ago

      I think it’s worthwhile to show people that views outside of their like-minded bubble exist. One of the nice things about the Fediverse over Reddit is that the upvote and downvote tallies are both shown, so we can see that opinions are not a monolith.

      Also, the point of engaging in Internet debate is never to convince the person you’re actually talking to. That almost never happens. The point is to present convincing arguments for the less-committed casual readers who are lurking rather than participating directly.

      • Disillusionist@piefed.world (OP) · ↑1 · 10 hours ago

        I agree with you that there can be value in “showing people that views outside of their likeminded bubble[s] exist”. And you can’t change everyone’s mind, but I think it’s a bit cynical to assume you can’t change anyone’s mind.
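On the robots.txt question raised upthread: Python’s standard library ships `urllib.robotparser`, which is how a well-behaved crawler checks whether a URL is off-limits before fetching it. A minimal sketch (the rules and crawler name below are an invented example, not any real site’s policy):

```python
from urllib import robotparser

# Illustrative robots.txt rules -- not any real site's policy.
rules = """\
User-agent: *
Disallow: /private/
"""

# A polite crawler parses the site's robots.txt and consults it
# before every fetch.
rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("MyCrawler", "https://example.com/private/page"))  # False
print(rp.can_fetch("MyCrawler", "https://example.com/public/page"))   # True
```

In practice a crawler would load the live file with `rp.set_url(...)` and `rp.read()`; note that robots.txt is purely advisory, which is exactly the gap projects like Poison Fountain aim to exploit.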