… in the United States, public investment in science seems to be redirected and concentrated on AI at the expense of other disciplines. And Big Tech companies are consolidating their control over the AI ecosystem. In these ways and others, AI seems to be making everything worse.

This is not the whole story. We should not resign ourselves to AI being harmful to humanity. None of us should accept this as inevitable, especially those in a position to influence science, government, and society. Scientists and engineers can push AI towards a beneficial path. Here’s how.

The essential point is that, as with the climate crisis, a vision of what positive future outcomes look like is necessary to actually get things done: things we could do with the technology that would make life better. The authors give a handful of examples and provide broad categories of activities that can help steer what is done.

  • Artisian@lemmy.world (OP) · 20 hours ago

    I think the argument is that, like with climate, it’s really hard to get people to just stop. They must be redirected with a new goal. “Don’t burn the rainforests” didn’t change oil company behavior.

    • givesomefucks@lemmy.world · 20 hours ago

      The problem is that instead of finding better ways to stop it (regulations), you’re looking for “productive” ways to use it…

      Apparently because you’ve pre-emptively given up.

      But if you succeed, it would lead to more AI and more damage to our planet.

      I fully understand that you believe you have good intentions; I’m just struggling to find a way to explain to you that intentions don’t matter. And I don’t think I’m going to come up with a way you’ll be able to understand.

      It’s like someone stuck in a hole in the ground who, instead of wanting to climb out, yanks everyone else back into the hole when they try to leave and keeps trying to get them to help redecorate the hole.

      I truly hope someone can present that in a way that gets through to you, because you are doing real damage.

      • frongt@lemmy.zip · 16 hours ago

        I don’t think we should stop it, exactly. Just tax it appropriately based on environmental impact.

        If that makes it prohibitively expensive, no great loss.

      • Artisian@lemmy.world (OP) · 18 hours ago

        Success would lead to AI use that properly accounted for its environmental impact and had to justify its costs. That likely means much AI use stopping, and broader reuse of the models we’ve already invested in (less competition in the space, please).

        The main suggestion in the article is regulation, so I don’t feel particularly understood atm. The practical problem is that, like oil, LLM use can be done locally at a variety of scales. It also provides something that some people want a lot:

        • Additional (poorly done) labor. Sometimes that’s all you need for a project.
        • Emulation of proof of work for existing infrastructure (e.g., job applications).
        • Translation and communication customization.

        It’s thus extremely difficult to regulate into non-existence globally (and would probably be bad if we did). So effective regulation must include persuasion and support for the folks who would most benefit from using it (or you need a huge enforcement effort, which I think has its own downsides).

        The problem is that even if everyone else leaves the hole, there will still be these users. Just as with drug use, piracy, or gambling, it’s easier to regulate when we offer a central, easy-to-access service and do harm reduction. To do that, you need a product that meets the needs and mitigates the harms.

        Persuading me I’m directionally wrong would require such evidence as:

        • Everyone does want to leave the hole (hard: I know people who don’t, and anti-AI messaging thus far has been more about signaling than persuasion)
        • That LLMs really can’t be run locally, or can be made difficult to run locally (hard: the Internet provides too much data, and making computing time expensive has a lot of downsides)
        • Proposed regulation that would actually be enforceable at reasonable cost (I haven’t thought hard about this; maybe it’s easy?)