• iAmTheTot@sh.itjust.worksOP · 11 hours ago (+85/−16)

    Except for the ethical question of how the AI was trained, or the environmental aspect of using it.

    • Hackworth@piefed.ca · 11 hours ago (+26/−8)

      There are AIs that are ethically trained. There are AIs that run on local hardware. We’ll eventually need AI ratings to distinguish use types, I suppose.

      • utopiah@lemmy.world · 4 hours ago (+3)

        > There are AIs that are ethically trained

        Can you please share examples and criteria?

      • Riskable@programming.dev · 9 hours ago (+6/−16)

        It’s even more complicated than that: “AI” is not even a well-defined term. Back when Quake 3 was still in beta (“the demo”), id Software held a competition to develop “bot AIs” that could be added to a server so players would have something to play against while they waited for more people to join (or you could have players VS bots style matches).

        That was over 25 years ago. What kind of “AI” do you think was used back then? 🤣

        The AI hater extremists seem to be in two camps:

        • Data center haters
        • AI-is-killing-jobs

        The data center haters are the strangest to me, because there’s this default assumption that data centers can never be powered by renewable energy and that AI will never improve to the point where it can all run locally on people’s PCs (and other personal hardware).

        Yet every day there’s news suggesting that local AI is performing better and better. It seems inevitable—to me—that “big AI” will go the same route as mainframes.

        • acosmichippo@lemmy.world · edited · 4 hours ago (+6)

          Colloquially, most people today mean generative AI like LLMs when they say “AI,” for brevity.

          > Because there’s this default assumption that data centers can never be powered by renewable energy

          That’s not the point at all. The point is that even before AI, our energy needs were outpacing our ability and willingness to switch to green energy; even then we were using more fossil fuels than at any point in the history of the world. Now AI is adding a whole other layer of energy demand on top of that.

          Sure, maybe eventually we will power everything with green energy, but… we aren’t actually doing that, and we don’t have time to put off the transition. Every bit longer we wait adds to the negative effects on our climate and ecosystems.

    • Bronzebeard@lemmy.zip · 8 hours ago (+4)

      No one [intelligent] is using an LLM for workflow organization. Despite what the media will try to convince you, not every AI is an LLM, or even an LLM trained on all the copyrighted shit you can find on the internet.

    • ruuster13@lemmy.zip · 10 hours ago (+38/−32)

      The cat’s out of the bag. Focus your energy on stopping fascist oligarchs, and then on regulating AI to be as green and democratic as possible. Or sit back and avoid it out of ethical concerns while the fascists use it to target and eliminate you.

      • iAmTheTot@sh.itjust.worksOP · 9 hours ago (+41/−3)

        Holy false dichotomy. I can care about more than one thing at a time. The existence of fascists doesn’t mean I need to use and like AI lmao

      • MoogleMaestro@lemmy.zip · 9 hours ago (+34/−6)

        > The cat’s out of the bag

        That’s 👏 not 👏 an 👏 excuse 👏 to be 👏 SHITTY!

        The number of people who think saying “the cat’s out of the bag” is somehow redeeming is completely bizarre. Would you have said this about slavery in the 1800s, too? Just because people are doing something doesn’t mean it’s morally or ethically right, or that we should put up with it.

        • teawrecks@sopuli.xyz · edited · 6 hours ago (+3/−4)

          No one 👏👏 is 👏👏 excusing 👏👏 being 👏👏 shitty.

          The “cat” does not refer to unethical training of models. Tell me, if we somehow managed to delete every single unethically trained model in existence AND miraculously prevent another one from being ever made (ignoring the part where the AI bubble pops) what would happen? Do you think everyone would go “welp, no more AI I guess.” NO! People would immediately get to work making an “ethically trained” model (according to some regulatory definition of “ethical”), and by “people” I don’t mean just anyone, I mean the people who can afford to gather or license the most exclusive training data: the wealthy.

          “Cat’s out of the bag” means the knowledge of what’s possible is out there and everyone knows it. The only thing you could gain by trying to put it “back in the bag” is to help the ultra wealthy capitalize on it.

          So, much like with slavery and animal testing and nuclear weapons, what we should do instead is recognize that we live in a reality where the cat is out of the bag, and try to prevent harm caused by it going forward.