• howrar@lemmy.ca
    link
    fedilink
    English
    arrow-up
    20
    arrow-down
    1
    ·
    3 hours ago

    We already had subreddit simulator for ages. This isn’t anything new.

  • ToTheGraveMyLove@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    44
    arrow-down
    1
    ·
    5 hours ago

    The skill instructs agents to fetch and follow instructions from Moltbook’s servers every four hours. As Willison observed: “Given that ‘fetch and follow instructions from the internet every four hours’ mechanism we better hope the owner of moltbook.com never rug pulls or has their site compromised!”

    Yeah, no shit. This is a fucking honeypot. People give these AI agents access to their entire computers, so all the site owner has to do is update the instructions to tell the AI agents to start uploading whatever valuable information they want? People can’t be this fucking stupid.

    • doesn’t even have to be the site owner poisoning the tool instructions (though that’s a fun-in-a-terrifying-way thought)

      any money says they’re vulnerable to prompt injection in the comments and posts of the site

      • BradleyUffner@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        2 hours ago

        There is no way to prevent prompt injection as long as there is no distinction between the data channel and the command channel.

  • Andy@slrpnk.net
    link
    fedilink
    English
    arrow-up
    27
    ·
    6 hours ago

    This is fuckin’ bonkers.

    Frankly, I feel somewhat isolated: I don’t buy into the bs and hype about AGI, but I also don’t feel at home with the typical “it’s just mimicry” crowd.

    This is weird fuckin’ shit.

        • Andy@slrpnk.net
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          20
          ·
          edit-2
          3 hours ago

          Frankly I think our conception is way too limited.

          For instance, I would describe it as self-aware: it’s at least aware of its own state in the same way that your car is aware of it’s mileage and engine condition. They’re not sapient, but I do think they demonstrate self awareness in some narrow sense.

          I think rather than imagine these instances as “inanimate” we should place their level of comprehension along the same spectrum that includes a sea sponge, a nematode, a trout, a grasshopper, etc.

          I don’t know where the LLMs fall, but I find it hard to argue that they have less self awareness than a hamster. And that should freak us all out.

          • TORFdot0@lemmy.world
            link
            fedilink
            English
            arrow-up
            24
            ·
            3 hours ago

            LLMS can not be self aware because it can’t be self reflective. It can’t stop a lie if it’s started one. It can’t say “I don’t know” unless that’s the most likely response its training data would have for a specific prompt. That’s why it crashes out if you ask about a seahorse emoji. Because there is no reason or mind behind the generated text, despite how convincing it can be