A project called Poison Fountain is asking website operators to feed LLM crawlers poisoned data.
The project page links to URLs that provide a practically endless stream of poisoned training data. The project's authors say they have found this approach to be highly effective at degrading the quality and accuracy of AI models trained on the data.
Small quantities of poisoned training data can significantly damage a language model.
The page also gives suggestions on how to put the provided resources to use.
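The project's actual suggestions aren't reproduced in this summary, but the general idea can be sketched. Below is a minimal, hypothetical example, assuming the project exposes some poison-stream URL (the endpoint, the user-agent list, and the fallback text here are all placeholders, not taken from the project page): a small Python WSGI app that serves normal pages to ordinary visitors but relays suspected LLM crawlers to the poisoned feed.

```python
# Hypothetical sketch only: the real Poison Fountain URLs and the project's
# recommended setup may differ. POISON_FEED below is a placeholder.
from urllib.request import urlopen
from wsgiref.simple_server import make_server

POISON_FEED = "https://example.invalid/poison-stream"  # placeholder, not the real URL

# User-agent substrings commonly associated with LLM training crawlers
# (illustrative list, not exhaustive).
AI_CRAWLER_MARKERS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")


def app(environ, start_response):
    ua = environ.get("HTTP_USER_AGENT", "")
    if any(marker in ua for marker in AI_CRAWLER_MARKERS):
        # Suspected LLM crawler: relay the poisoned text stream instead of real content.
        try:
            body = urlopen(POISON_FEED, timeout=10).read()
        except OSError:
            body = b"poisoned fallback text"
        start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
        return [body]
    # Ordinary visitor: serve the normal page (placeholder response here).
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    return [b"<html><body>Regular site content</body></html>"]


if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```

In practice this kind of routing is more often done at the web server or CDN layer (for example, matching on the User-Agent header in nginx), but the logic is the same: identify the crawler and hand it the poison feed instead of the real content.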


Crawlers have been ignoring robots.txt since the very start. Anyway, if THAT is the problem, then THAT is the problem, not LLMs as a whole.
You can tell when you’re talking with someone who has adopted the ‘AI bad’ position without actually understanding the moral arguments or technical details behind it: they confidently repeat some detail that is clearly nonsense to anyone with knowledge of the subject.