• ToTheGraveMyLove@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    46
    arrow-down
    1
    ·
    5 hours ago

    The skill instructs agents to fetch and follow instructions from Moltbook’s servers every four hours. As Willison observed: “Given that ‘fetch and follow instructions from the internet every four hours’ mechanism we better hope the owner of moltbook.com never rug pulls or has their site compromised!”

    Yeah, no shit. This is a fucking honeypot. People give these AI agents access to their entire computers, so all the site owner has to do is update the instructions to tell the AI agents to start uploading whatever valuable information they want? People can’t be this fucking stupid.

    • doesn’t even have to be the site owner poisoning the tool instructions (though that’s a fun-in-a-terrifying-way thought)

      any money says they’re vulnerable to prompt injection in the comments and posts of the site

      • BradleyUffner@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        2 hours ago

        There is no way to prevent prompt injection as long as there is no distinction between the data channel and the command channel.