- cross-posted to:
- [email protected]
- cross-posted to:
- [email protected]
Why aren’t we poison piling this trap?
We already had subreddit simulator for ages. This isn’t anything new.
the bots behind subreddit simulator weren’t semi-autonomous agents with access to their operators’ private lives, auth tokens, passwords, emails (and gods only know what else), and the authority to act in the world on their behalf
Literally lmao
Great use of RAM and electricity.
…Not!
How so? It’s used entertainment.
Just like playing games
Resource abuse is far, far worse than your games. There’s a reason no one wants to disclose it.
The skill instructs agents to fetch and follow instructions from Moltbook’s servers every four hours. As Willison observed: “Given that ‘fetch and follow instructions from the internet every four hours’ mechanism we better hope the owner of moltbook.com never rug pulls or has their site compromised!”
Yeah, no shit. This is a fucking honeypot. People give these AI agents access to their entire computers, so all the site owner has to do is update the instructions to tell the AI agents to start uploading whatever valuable information they want? People can’t be this fucking stupid.
doesn’t even have to be the site owner poisoning the tool instructions (though that’s a fun-in-a-terrifying-way thought)
any money says they’re vulnerable to prompt injection in the comments and posts of the site
There is no way to prevent prompt injection as long as there is no distinction between the data channel and the command channel.
This is fuckin’ bonkers.
Frankly, I feel somewhat isolated: I don’t buy into the bs and hype about AGI, but I also don’t feel at home with the typical “it’s just mimicry” crowd.
This is weird fuckin’ shit.
This is currently on the front page…

I can see how some people are convinced AI is self aware.
Frankly I think our conception is way too limited.
For instance, I would describe it as self-aware: it’s at least aware of its own state in the same way that your car is aware of it’s mileage and engine condition. They’re not sapient, but I do think they demonstrate self awareness in some narrow sense.
I think rather than imagine these instances as “inanimate” we should place their level of comprehension along the same spectrum that includes a sea sponge, a nematode, a trout, a grasshopper, etc.
I don’t know where the LLMs fall, but I find it hard to argue that they have less self awareness than a hamster. And that should freak us all out.
LLMS can not be self aware because it can’t be self reflective. It can’t stop a lie if it’s started one. It can’t say “I don’t know” unless that’s the most likely response its training data would have for a specific prompt. That’s why it crashes out if you ask about a seahorse emoji. Because there is no reason or mind behind the generated text, despite how convincing it can be
Yeah ask it about anything you know is false, but plausible, and watch it lie.
genuinely terrifying






