  • Canconda@lemmy.ca · 2 days ago

    As someone who grew up using telephones… the last part is the most human chat bot interaction I’ve ever seen.

  • groats_survivor@lemmy.world · 2 days ago

    Me to AI: alright, I’m about to send you a two part message. Do not respond to the first message.

    AI: Gotcha! I won’t respond

    • davidgro@lemmy.world · 2 days ago

      That could reasonably be interpreted as meaning you haven’t sent the first part yet.

      But I assume it still responds like that when you do.

    • Victor@lemmy.world · 2 days ago

      How would it know not to respond to the first part without processing it first? The request makes no sense.

      Like telling a human, hey, don’t listen to this first part! Also don’t think about elephants!

  • thebestaquaman@lemmy.world · 2 days ago

    I would think that, since it’s been recognised that these messages are costing a lot of energy (== money) to process, companies would at some point add a simple <if input == “thanks”> type filter to catch a solid portion of them. Why haven’t they?
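
    A rough sketch of the kind of pre-filter being described here; everything in it (the call_llm backend, the canned reply, the keyword list) is a hypothetical stand-in, not any vendor’s actual pipeline:

    ```python
    # Hypothetical sketch: short-circuit trivial "thanks"/"bye" messages
    # before they ever reach the expensive model call.
    CLOSERS = {"thanks", "thank you", "thx", "ty", "ok thanks", "bye", "goodbye"}

    def handle_message(text: str) -> str:
        normalized = text.strip().lower().rstrip(".!")
        if normalized in CLOSERS:
            return "You're welcome!"   # canned reply, zero model inference
        return call_llm(text)          # only non-trivial messages reach the LLM

    def call_llm(text: str) -> str:
        # stand-in for the expensive model call
        raise NotImplementedError
    ```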

    • Hoimo@ani.social · 2 days ago

      It won’t be as simple as that. The engineers who work on these systems can only think in terms of LLMs and text classification, so they’d run your message through a classifier and end the conversation if it returns a “goodbye or thanks” score above 0.8, saving exactly 0 compute power.

      • thebestaquaman@lemmy.world · 2 days ago

        I mean, even if we resort to using a neural network to check “is the conversation finished?”, that hyper-specialised NN would likely be orders of magnitude cheaper to run than your standard LLM, so you could likely save quite a bit of power/money by using it as a filter in front of the actual LLM, no?
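
        As a toy illustration of that gating idea (the training examples and the 0.8 threshold are made up, and scikit-learn here is just a convenient stand-in for whatever lightweight “conversation finished?” model one would actually train):

        ```python
        # Toy sketch: gate the expensive LLM behind a much cheaper
        # "is this just a goodbye/thanks?" classifier.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        texts  = ["thanks", "thank you so much", "ok bye", "goodbye",
                  "how do I reset my password", "what's the weather tomorrow"]
        labels = [1, 1, 1, 1, 0, 0]      # 1 = conversation-closing, 0 = real query

        gate = make_pipeline(TfidfVectorizer(), LogisticRegression())
        gate.fit(texts, labels)

        def respond(message: str) -> str:
            closing_prob = gate.predict_proba([message])[0][1]
            if closing_prob > 0.8:       # cheap path: skip the big model entirely
                return "Glad I could help!"
            return call_llm(message)     # expensive path, stand-in only

        def call_llm(message: str) -> str:
            raise NotImplementedError
        ```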

  • Pratai@lemmy.world · 2 days ago

    Who in their right mind would want to talk to an AI to begin with? I don’t think I’ll ever understand why someone would want to do this outside of a single instance of curiosity.

  • SUPER SAIYAN@lemmy.world · 2 days ago

    I don’t understand. People here argued at length with me for using AI, citing the large amounts of water it uses and the noise around the data server areas. But they upvoted this? And how is this a meme? It’s a long screenshot.

  • bleistift2@sopuli.xyz · 2 days ago

    People on Lemmy bitching about how bad AI is for the environment, yet this gets upvotes.

    Doublethink 10/10

      • NihilsineNefas@slrpnk.net · 2 days ago

        Wouldn’t that be the GPT user that posted this screenshot? Someone making some kind of “I’m getting back at those pesky AI companies by costing them money” statement… by pressing the “push to run a machine that burns the planet” button?

        Or worse yet, it’s an accelerationist willingly pushing the button.