• Andy@slrpnk.net

        Frankly, I think our conception of self-awareness is way too limited.

        For instance, I would describe it as self-aware: it’s at least aware of its own state in the same way that your car is aware of its mileage and engine condition. They’re not sapient, but I do think they demonstrate self-awareness in some narrow sense.

        I think that rather than imagining these instances as “inanimate,” we should place their level of comprehension along the same spectrum that includes a sea sponge, a nematode, a trout, a grasshopper, etc.

        I don’t know where the LLMs fall, but I find it hard to argue that they have less self-awareness than a hamster. And that should freak us all out.

        • TORFdot0@lemmy.world

          LLMs cannot be self-aware because they aren’t self-reflective. They can’t stop a lie once they’ve started one. They can’t say “I don’t know” unless that’s the most likely response in their training data for a given prompt. That’s why they crash out if you ask about a seahorse emoji: there is no reason or mind behind the generated text, however convincing it can be.
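
          A toy sketch of what “most likely response” means here; the candidates and probabilities below are invented for illustration, not taken from any real model:

          ```python
          # Toy sketch of greedy next-token choice (made-up numbers, no real model).
          # The model only ever emits whichever continuation scores highest;
          # "I don't know" appears only when those tokens happen to win.
          candidate_probs = {
              "Paris": 0.62,
              "London": 0.21,
              "I don't know": 0.04,
          }
          next_token = max(candidate_probs, key=candidate_probs.get)
          print(next_token)  # -> Paris
          ```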