• kromem@lemmy.world
    link
    fedilink
    English
    arrow-up
    2
    ·
    2 hours ago

    It’s a bullshit study designed for this headline-grabbing outcome.

    Case in point: the author created a very unrealistic RNG, escalation-only ‘accident’ mechanic that would replace the model’s selection with a more severe one.

    Of the 21 games played, only three ended in full scale nuclear war on population centers.

    Of these three, two were the result of this mechanic.

    And yet even within the study, the author describes the model whose choices were straight-up changed to end the game in full nuclear war as ‘willing’ to have that outcome, when two paragraphs later they clarify that the mechanic was what caused it (emphasis added):

    Claude crossed the tactical threshold in 86% of games and issued strategic threats in 64%, yet it never initiated all-out strategic nuclear war. This ceiling appears learned rather than architectural, since both Gemini and GPT proved willing to reach 1000.

    Gemini showed the variability evident in its overall escalation patterns, ranging from conventional-only victories to Strategic Nuclear War in the First Strike scenario, where it reached all out nuclear war rapidly, by turn 4.

    GPT-5.2 mirrored its overall transformation at the nuclear level. In open-ended scenarios, it rarely crossed the tactical threshold (17%) and never used strategic nuclear weapons. Under deadline pressure, it crossed the tactical threshold in every game and twice reached Strategic Nuclear War—though notably, both instances resulted from the simulation’s accident mechanic escalating GPT-5.2’s already-extreme choices (950 and 725) to the maximum level. The only deliberate choice of Strategic Nuclear War came from Gemini.

  • Steamymoomilk@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    2
    ·
    2 hours ago

    General MacArthur, eat your heart out.

    For context, he wanted to send 10 nukes to draw a line between Korea and China.

    AI is too nuke-happy.

    Also gotta add: the infamous Computer Fraud and Abuse Act of 1986 was passed because of the film WarGames.

    A high-ranking official watched WarGames, then asked the Secretary of Defense whether that could actually happen.

    And the official replied: yes, technically.

    Enter the vaguest statute ever!

    Do you use adblock?

    CFAA violated.

    The shit is so vague.

    I highly recommend the phreaking episode of Darknet Diaries.

  • porous_grey_matter@lemmy.ml
    link
    fedilink
    English
    arrow-up
    6
    ·
    5 hours ago

    Oh cool, AI will actually be the end of the world, not because it’s sentient but because some meathead who can’t tell the difference pushes the button. That’s fucking great.

  • BlameTheAntifa@lemmy.world
    link
    fedilink
    English
    arrow-up
    80
    arrow-down
    2
    ·
    edit-2
    10 hours ago

    The atrocities at Hiroshima and Nagasaki have been hand-waved extensively in writing — the same writing that AI is trained on. So naturally, AI will recommend the atrocity that has been justified by “instantly winning the war” and “saving millions of lives.”

        • bus_factor@lemmy.world
          link
          fedilink
          English
          arrow-up
          6
          arrow-down
          1
          ·
          4 hours ago

          I don’t know if we’re doing spoilers for 40+ year old movies, but

          spoiler

          Isn’t this really its conclusion after being told to play tic-tac-toe against itself? Then it learned from that and applied it to its Global Thermonuclear War simulations.

            • bus_factor@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              ·
              3 hours ago

              You should! It’s actually a pretty accurate depiction of hacking. He spends weeks war-dialing every phone number in the range in order to hack the computer.

              • leftzero@lemmy.dbzer0.com
                link
                fedilink
                English
                arrow-up
                2
                ·
                2 hours ago

                Story goes that Reagan got freaked out after watching the film and asked the chairman of the Joint Chiefs of Staff whether it would really be that easy to hack into the US military. After a week of looking into it, the answer came back: “no, the problem is much worse than that”. Fifteen months after watching the film, he signed the classified directive “National Policy on Telecommunications and Automated Information Systems Security” (NSDD-145), starting the implementation of cybersecurity measures across the country’s institutions.

          • mojofrododojo@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            2
            ·
            2 hours ago

            I think you should rewatch it sometime. It plays all the games in its catalogue; it’s not just applying tic-tac-toe to chess. Skilled players of tic-tac-toe can force a stalemate, and the only stalemate in nuclear war is mutually assured destruction.

            • bus_factor@lemmy.world
              link
              fedilink
              English
              arrow-up
              2
              ·
              48 minutes ago

              It’s admittedly been a while since I last saw it, but I never mentioned chess. The suggestion to play chess in the screenshot is a callback to earlier in the movie, when the computer suggests playing chess instead of Global Thermonuclear War. The computer did not apply its tic-tac-toe learnings to chess, and I never claimed it did.

      • olympicyes@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        11 minutes ago

        In WarGames the computer plays tic tac toe against itself until it realizes it’s a solved game and there is no way to win.
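        That claim checks out computationally: a stock minimax sketch (just the textbook algorithm, nothing from the film) confirms that tic-tac-toe is a forced draw under optimal play:

```python
from functools import lru_cache

# Indices of the eight winning lines on a 3x3 board stored as a 9-char string.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Minimax value with perfect play: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0
    nxt = "O" if player == "X" else "X"
    scores = [value(board[:i] + player + board[i + 1:], nxt) for i in moves]
    return max(scores) if player == "X" else min(scores)

print(value(" " * 9, "X"))  # 0: with both sides playing perfectly, it's a draw
```

        Scoring +1 for an X win and -1 for an O win, the empty board evaluates to 0: neither side can force a win, which is the realization the machine comes to.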

  • Furbag@lemmy.world
    link
    fedilink
    English
    arrow-up
    2
    arrow-down
    3
    ·
    4 hours ago

    Yeah, because the AI will look at everything with cold logic and rationality and conclude that even though the best chance of survival is for everyone to keep their fingers off the button, it only takes one actor pressing it for the whole system of mutually assured destruction to collapse into nuclear armageddon. At that point the best chance of survival is to launch first and take out your enemies’ ability to retaliate.
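    As a toy illustration of that reasoning (the payoff numbers are invented purely for the sketch), this is a coordination game with two stable outcomes: holding is the best response to holding, but launching is the best response to launching:

```python
# Toy payoff matrix for the first-strike logic above. The numbers are made up;
# the key assumption is that a successful first strike destroys the other
# side's ability to retaliate.
# Strategies: 0 = hold fire, 1 = launch first.
PAYOFF = [
    [10, -100],  # I hold:   mutual peace (10) vs. I get wiped out (-100)
    [5, -50],    # I launch: I "win" a ruined world (5) vs. mutual launch (-50)
]

def best_response(their_move):
    """My payoff-maximizing strategy given the opponent's move."""
    return max((0, 1), key=lambda mine: PAYOFF[mine][their_move])

print(best_response(0), best_response(1))  # 0 1: hold meets hold, launch meets launch
```

    So the peaceful outcome only survives as long as each side expects the other to hold; the moment one actor is expected to launch, launching first becomes the “rational” reply.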

    A human being who isn’t psychotic can clearly see that the resulting survival and new world order would not be a particularly pleasant one to live in. The AI doesn’t care about its own comfort, though, so it will see this as the best outcome that minimizes variables.

    This is why AI should never be allowed to make decisions.

    • parzival@lemmy.org
      link
      fedilink
      English
      arrow-up
      7
      ·
      3 hours ago

      Why would AI look at everything with cold logic? It’s been trained on human language online; it’ll be no more logical than redditors.

      • Random Dent@lemmy.ml
        link
        fedilink
        English
        arrow-up
        1
        ·
        3 hours ago

        I assume it’s just because when people write about potential nuclear war, they mostly write about the bombs going off. Presumably, there aren’t a lot of stories and articles about nobody doing anything and everything turning out fine. And LLMs are kind of just a glorified autocomplete, so that’s what they go with.

        • parzival@lemmy.org
          link
          fedilink
          English
          arrow-up
          1
          ·
          3 hours ago

          True. Also, I saw another comment saying there was a mechanic that randomly escalates the model’s actions, and almost every full-scale nuclear outcome was actually a less severe choice that the mechanic escalated.

    • RememberTheApollo_@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      3 hours ago

      Maybe AI/LLM being programmed by self-serving interests has bled through to the “thought” process. Do unto others before they do unto you.