A project called Poison Fountain is asking website operators to feed LLM crawlers poisoned data.
The project page links to URLs that provide a practically endless stream of poisoned training data. The project's authors have determined that this approach is very effective at ultimately sabotaging the quality and accuracy of AI trained on it.
Small quantities of poisoned training data can significantly damage a language model.
The page also gives suggestions on how to put the provided resources to use.
Been thinking about making one of these too, especially since I have a catchy name: asbestos
If, suppose, I were optimistic about this technology but pessimistic about its current stage of development, I'd expect this to be a cure. It's a problem they'll have to solve. A test they'll have to pass.
If someone builds a mechanism inside those things that constructs a graph of syllogisms, no kind of poisoned input data will be able to hurt them.
So - this is a good thing, but when people say it’s a rebellion, it’s not.
Samsung and Anthropic independently published data showing how little bad data it takes to effectively poison very large models. LLMs pretend to be complex, but they aren't, and they won't keep improving at the initial rate we got used to seeing. Just ask OpenAI.
“You’re not opposing me. All you’ve done is create a problem that will stop me until I have it figured out.” is the description of every struggle between opposing forces, so it’s interesting that you disagree with that.
A test they’ll have to pass.
This makes me chuckle, as they invented euphemisms like ‘hallucinations’ because their LLMs can’t do what they promise. Fabulous marketing, but clearly they didn’t do enough testing.
as they invented euphemisms like ‘hallucinations’
Seems like a pretty accurate word to use, no? Could also use fabrication, concoction, phantom, or something else? I think “lie” and its synonyms are not accurate, since that requires intent. Since the LLM does not have intent, it cannot “lie”.
Not all problems may be cured immediately. Battles are rarely won with a single attack. A good thing is not the same as nothing.
Small quantities of poisoned training data can significantly damage a language model.
Source: trust me bro.
Nightshade tried the same thing and it never worked.
Here’s your source: https://www.anthropic.com/research/small-samples-poison
Nightshade did work on older models. Newer models adapted to prevent poisoning.
This is a new approach.
Yeah, Nightshade was defeated by a blur-and-sharpen pass IIRC, lol. Still, it was a good first step.
This is just stupid^20
Seems like a bad take from my POV. As someone who uses and has made money using LLMs, I don’t feel it’s OK to poison them; I wouldn’t feel OK with myself getting something for free, even making money with it, while poisoning it at the same time. So my take is: you can always block crawlers in your nginx.conf with a few extra steps (see the sketch below), and you can even use an LLM to write and improve the config until it blocks all the major crawlers. IMHO, if it’s public data, it’s public for crawlers too; it’s up to you whether you set up a block for them on your end.
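To illustrate that nginx point, here’s a minimal sketch of what such a block could look like. The user-agent names are just examples of commonly reported AI crawlers, and both blocks belong inside the http { } context of nginx.conf; adjust the list to whatever you actually want to block.

    # Flag requests whose User-Agent matches known AI crawlers (example list only)
    map $http_user_agent $is_ai_crawler {
        default          0;
        ~*GPTBot         1;
        ~*ClaudeBot      1;
        ~*CCBot          1;
        ~*Bytespider     1;
        ~*PerplexityBot  1;
    }

    server {
        listen 80;
        server_name example.com;

        # Reject flagged crawlers before they reach any content
        if ($is_ai_crawler) {
            return 403;
        }

        location / {
            root /var/www/html;
        }
    }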
It would not be fair to prevent AI from violating every single copyright on earth? That is a novel take.
Especially as most people do not use AI, but companies are trying to force it on them, ultimately to replace half the workforce and send the economy into a doom spiral.
What about the following take: LLMs are an abomination that consumes enormous amounts of resources for… well… really nothing, besides being a tool to further enshittify the Internet and the world as a whole, a tool that makes it easy to create ever more divisive content (not to mention the special content Grok is now known for), killing jobs and replacing genuine human creativity with a cheap, warped imitation thereof.
My opinion is: everybody who uses or promotes this technology is an accomplice in making the world a worse place.
“Public” is a tricky term. At this point everything is being treated as public by LLM developers. Maybe not you specifically, but a lot of people aren’t happy with how their data is being used to train AI.
Also, they always come up with new ways to circumvent blocking mechanisms and push extra work onto admins.
Remember how judges ruled when somebody circumvented copy restrictions on media?
Doesn’t work, but if it makes people feel better, I suppose they can waste their resources doing this.
Modern LLMs aren’t trained on just whatever raw data can be scraped off the web any more. They’re trained with synthetic data that’s prepared by other LLMs and carefully crafted and curated. Folks are still thinking ChatGPT 3 is state of the art here.
Do you have any basis for this assumption, FaceDeer?
Based on your pro-AI-leaning comments in this thread, I don’t think people should accept defeatist rhetoric at face value.
A basic Google search for “synthetic data llm training” will give you lots of hits describing how the process goes these days.
Take this as “defeatist” if you wish, as I said it doesn’t really matter. In the early days of LLMs when ChatGPT first came out the strategy for training these things was to just dump as much raw data onto them as possible and hope quantity allowed the LLM to figure something out from it, but since then it’s been learned that quality is better than quantity and so training data is far more carefully curated these days. Not because there’s “poison” in it, just because it results in better LLMs. Filtering out poison will happen as a side effect.
It’s like trying to contaminate a city’s water supply by peeing in the river upstream of the water treatment plant drawing from it. The water treatment plant is already dealing with all sorts of contaminants anyway.
That might be an argument if only large companies existed and they only trained foundation models.
Scraped data is most often used for fine-tuning models for specific tasks. For example, mimicking people on social media to push an ad/political agenda. Using a foundational model that speaks like it was trained on a textbook doesn’t work for synthesizing social media comments.
In order to sound like a Lemmy user, you need to train on data that contains the idioms, memes and conversational styles used in the Lemmy community. That can’t be created from the output of other models, it has to come from scraping.
Poisoning the data going to the scrapers will either kill the model during training or force everyone to pre-process their data, which increases the costs and expertise required to attempt such things.
Are you proposing flooding the Fediverse with fake bot comments in order to prevent the Fediverse from being flooded with fake bot comments? Or are you thinking more along the lines of that guy who keeps using “Þ” in place of “th”? Making the Fediverse too annoying to use for bot and human alike would be a fairly Pyrrhic victory, I would think.
From what I’ve heard, the influx of AI data is one of the reasons actual human data is becoming increasingly sought after. AI training AI has the potential to become a sort of digital inbreeding that suffers in areas like originality and other ineffable human qualities that AI still hasn’t quite mastered.
I’ve also heard that this particular approach to poisoning AI is newer and thought to be quite effective, though I can’t personally speak to its efficacy.
Faults in replication? That can become cancer for humans. AI as well I guess.
Idiots: This new technology is still quite ineffective. Let’s sabotage its improvement!
Imbeciles: Yeah!
Corpos: Don’t steal our stuff! That’s piracy!
Also corpos: Your stuff? My stuff now.
Bootlickers: Oh my god this shoe polish is delicious.
Person: Says a thing
Person 2, who disagrees with the thing: YOU’RE A BOOTLICKER!
Super convincing. I’m sure you’re going to win people over to your position if you scream loud enough.
You should pick one: either you like the current copyright system or you don’t. You can’t have it both ways.
Corporations want the existing copyright system for their own products but simultaneously want to freely scrape data from everyone else.
I see that as a copyright problem, not a specific LLM one.
This issue is largely manifesting through AI scraping right now. Additionally, many scrapers intentionally ignore robots.txt. Currently, LLM scrapers are basically just bad actors on the internet. Courts have also ruled in favor of a number of AI companies when sued in the US, so it’s unlikely anything will change. Effectively, if you don’t like the status quo, stuff like this is one of your few options.
This isn’t even touching on whether we actually want these companies to improve their models before resolving the problems of energy consumption and the potential displacement of human workers.
All crawlers have ignored robots.txt since the very start. Anyway, if THAT is the problem, then IT is the problem, not LLMs as a whole.
You can tell when you’re talking with someone who has been given the position of ‘AI Bad’, but doesn’t actually understand the moral positions or technological details that form the foundation of that argument by how confidently they repeat some detail that is clearly nonsense to anybody with knowledge of the subject.
Third thing: Point out obvious hypocrisy.
AI companies could start, I don’t know- maybe asking for permission to scrape a website’s data for training? Or maybe try behaving more ethically in general? Perhaps then they might not risk people poisoning the data that they clearly didn’t agree to being used for training?
Why should they ask permission to read freely provided data? Nobody’s asking for any permission, but LLM trainers somehow should? And what do you want from them from an ethical standpoint?
Much of it might be freely available data, but there’s a huge difference between you accessing a website for data and an LLM doing the same thing. We’ve had bots scraping websites since the ’90s; it’s not a new thing. And since scraping bots have existed, we’ve developed a standard on the web to deal with it, called “robots.txt”: a plain text file telling bots what they are allowed to do on a website and how they should behave (a minimal example is below).
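For anyone who hasn’t seen one, a robots.txt is just a text file served from the site root; something like this, where the bot name and paths are purely illustrative:

    # https://example.com/robots.txt
    # Ask one specific crawler to stay away entirely
    User-agent: GPTBot
    Disallow: /

    # Everyone else may crawl, but should skip a private section
    User-agent: *
    Disallow: /private/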
LLMs are notorious for disrespecting this, leading to situations where small companies and organisations have their websites scraped so thoroughly and frequently that they can’t even stay online anymore, as well as skyrocketing their operational costs. In the last few years we’ve had to develop ways just to protect ourselves against this. See the “Anubis” project.
Hence, it’s much more important that LLMs follow the rules than that you and I do so on an individual level.
It’s the difference between you killing a couple of bees in your home versus an industry specialising in exterminating bees at scale. The efficiency is a big factor.
Is the only imaginable system for AI to exist one in which every website operator, or musician, artist, writer, etc has no say in how their data is used? Is it possible to have a more consensual arrangement?
As far as the question about ethics, there is a lot of ground to cover on that. A lot of it is being discussed. I’ll basically reiterate what I said that pertains to data rights. I believe they are pretty fundamental to human rights, for a lot of reasons. AI is killing open source, and claiming the whole of human experience for its own training purposes. I find that unethical.
Killing open source? How?!
The guy is talking about consulting, as I understand it. Yes, an LLM is great for reading documentation; that’s what LLMs are for. Now people can use those libraries without spending ages reading through docs. That’s progress. I see it as a way to write more open source, because it’s become simpler and less tedious.
He’s jumping ship because it’s destroying his ability to eke out a living. The problem isn’t a small one, what’s happening to him isn’t a limited case.
Yes, they should, because they generate way more traffic. Why do you think people are trying to protect websites from AI crawlers? Because they want to keep public data secret?
Also, everyone knows AI companies used copyrighted materials and private data without permission. If you think they only used public data you’re uninformed or lying on their behalf.
I personally consider the current copyright laws completely messed up, so I see no problem in using any data technically available for processing.
Ok, so you think it’s ok for big companies to break the laws you don’t like. Cool. I’m sure those big companies won’t sue you when you infringe on some law of theirs that you don’t like.
And I like the way you just ignored the two other issues I mentioned. Are you fine with AI bots slowing sites like Codeberg to a crawl? Are you fine with AI companies using personal data without consent?
I’m fine with companies using any freely available data.
I’m also fine with them using data they can get for free like, I don’t know, weather data they collect themselves?
Data hosted by private individuals and open source projects is not free. Someone has to pay for hosting, and AI companies sucking up data with an army of bots is pushing the cost of hosting beyond the means of those people/projects. They are shifting the costs of providing the “free” data onto the community while keeping all the profits.
Private data used without consent is also not free. It’s valuable, protected data and AI companies are simply stealing it. Do you consider stolen things free?
I see your attitude is “they don’t hurt me personally and I don’t care what they do to other people”. It’s either ignorant or straight antisocial. Also a bit bootlickish.
As someone who self-hosts a LLM and trains it on web data regularly to improve my model, I get where your frustration is coming from.
But engaging in discourse here, where people already have a heavy bias against machine-learning language models, is a fruitless effort. No one here is going to provide you catharsis with a genuine conversation that isn’t rhetoric.
Just put the keyboard down and walk away.
I don’t have a bias against LLMs; I use them regularly, albeit either for casual things (movie recommendations) or as an automation tool in work areas where I can somewhat easily validate the output or where the specific task is low impact.
I am just curious, do you respect robots.txt?
I can’t speak for everyone, but I’m absolutely glad to have good-faith discussions about these things. People have different points of view, and I certainly don’t know everything. It’s one of the reasons I post, for discussion. It’s really unproductive to make blanket statements that try to end discussion before it starts.
It’s really unproductive to make blanket statements that try to end discussion before it starts.
I don’t know, it seems like their comment accurately predicted the response.

Even if you want to see yourself as some beacon of open and honest discussion, you have to admit that there are a lot of people who are toxic to anybody who mentions any position that isn’t rabidly anti-AI enough for them.
I think it’s worthwhile to show people that views outside of their like-minded bubble exist. One of the nice things about the Fediverse over Reddit is that the upvote and downvote tallies are both shown, so we can see that opinions are not a monolith.
Also, engaging in Internet debate is never to convince the person you’re actually talking to. That almost never happens. The point of debate is to present convincing arguments for the less-committed casual readers who are lurking rather than participating directly.
I agree with you that there can be value in “showing people that views outside of their likeminded bubble[s] exist”. And you can’t change everyone’s mind, but I think it’s a bit cynical to assume you can’t change anyone’s mind.









