… in the United States, public investment in science seems to be redirected and concentrated on AI at the expense of other disciplines. And Big Tech companies are consolidating their control over the AI ecosystem. In these ways and others, AI seems to be making everything worse.
This is not the whole story. We should not resign ourselves to AI being harmful to humanity. None of us should accept this as inevitable, especially those in a position to influence science, government, and society. Scientists and engineers can push AI towards a beneficial path. Here’s how.
The essential point is that, as with the climate crisis, actually getting things done requires a vision of what positive outcomes look like: things we could do with the technology that would make life better. They give a handful of examples and describe broad categories of activities that can help steer what gets done.



I am genuinely curious why you think we need a positive vision for AI.
I say this as someone who regularly uses LLMs for work (more as a supplement to web searching) and uses “AI” in other areas as well (low resolution video upscaling). There are also many other very interesting use cases (often specialized) that tend to be less publicized than LLM related stuff.
I still don’t see why we need a positive vision for AI.
From my perspective, “AI” is a tool; it’s not inherently positive or negative. But as things stand right now, the industry is dominated by oligarchs and conmen types (although they of course don’t have a monopoly in this area). And since we don’t really have a way to rein in the oligarchs (i.e. make them take responsibility for their actions), the discussion around a positive vision almost seems irrelevant. Let’s say we do have a positive vision for AI (I am not even necessarily opposed to such a vision); my question would be, so what?
Perhaps we are just talking about different things. :)
P.S. FWIW, I read your replies in this thread.
I am primarily trying to restate or interpret Schneier’s argument and bring the linked article into the comments. I’m not sure I’m very good at it.
He points out a problem which is more or less exactly as you describe it. AI is on a fast track to be exploited by oligarchs and tyrants. He then makes an appeal: we should not let this technology, which is a tool just as you say, be defined by the evil it does. His fear is: “that those with the potential to guide the development of AI and steer its influence on society will view it as a lost cause and sit out that process.”
That’s the argument afaict. I think the “so what” is something like: scientists will do experiments and analysis and write papers that inform policy, inspire subversive use, and otherwise use the advantages of the quick to make gains against the strong. See the 4 action items that they call for.
Thanks.
Can’t say I agree though. I can’t think of any historical examples where a positive agenda in and of itself made a difference.
One example would be industrialization at the end of the 19th century and the first part of the 20th century. One could argue it was far more disruptive of pre-industrial society (railroads, telegraph, radio, mass production) than the information age is now.
Clearly industrialization enabled mass benefits in society, but it took WW1/WW2 and the rise of uncompromising, brutal revolutionary regimes for societies to come to terms with pros and cons of industrial society and find a middle path of sorts (until the next disruption).
Let’s hope it doesn’t get to that point in our times. That being said, the current oligarch regime comes off as even more self-assured than the beneficiaries of early industrial society (Gilded Age oligarchs in the US, the Romanov dynasty in Tsarist Russia).
The current batch of oligarchs has the benefit of hindsight, and yet there is no end to their hubris, with Bezos talking about millions living in space and comically stupid projects like data centres in orbit and The Simpsons-style “block the sun” schemes to address climate change.
If I were to try and play up his argument, I might appeal to “we can shorten the dark times”, Asimov’s Foundation style. But I admit my heart’s not in it. Things will very likely get worse before they get better, partially because I don’t particularly trust anyone with the ability to influence things just a bit to actually use that influence productively.
I do think this oligarchy has very different tools than those of old: far fewer mercenary assassinations of labor leaders, a very different and weirdly shaped stranglehold on media, and I put lower odds on a hot conflict with strikers.
I don’t know the history of hubris from oligarchs; were the Tsars or Barons also excited about any (absurd and silly) infrastructure projects explicitly for the masses? I guess there were the Ford towns in the Amazon?