… in the United States, public investment in science seems to be redirected and concentrated on AI at the expense of other disciplines. And Big Tech companies are consolidating their control over the AI ecosystem. In these ways and others, AI seems to be making everything worse.
This is not the whole story. We should not resign ourselves to AI being harmful to humanity. None of us should accept this as inevitable, especially those in a position to influence science, government, and society. Scientists and engineers can push AI towards a beneficial path. Here’s how.
The essential point is that, as with the climate crisis, a vision of what positive future outcomes look like is necessary to actually get things done: things with the technology that would make life better. They give a handful of examples and provide broad categories of activities that can help steer what is done.
You know what else would make life better for people?
Accessible healthcare…
You know why that’s better than AI? We don’t need to burn the planet down to use it after spending billions to get it going.
I’m not convinced that “AI” is even what it’s meant to be. Worse, I think scenarios of success are already drawn up in stories and science fiction - and 2025 AI suggests we’re not even close.
Now that more information is available concerning the US government’s private recollections and thoughts surrounding its military activities in Afghanistan, I’m suspicious that this AI push is a “campaign”: simply another game of sleight of hand, or a pump-and-dump maneuver. The US dollar remains a major reserve currency, but successive governments over the last 20 years have been incompetent, and the country has been mismanaged for far longer than anyone expected.
With the US signalling strongly that it is giving up on competing with China on advanced technologies like renewables and batteries, there’s little else left besides the promise that AI will somehow swoop in and fix it all. But as netizens already point out, capitalist corporations cannot “benefit” from AI without taking advantage of its promise - taking jobs away from humans.
Sadly “AI”, or whatever you want to call it, is an interesting tool, but one that still requires human supervision and oversight. AI is not the magic promised for all the countless billions spent, water consumed, and energy depleted. I think the world is starting to grow suspicious, and the US faces a market correction due to fears of the AI bubble.
Perhaps AI’s promise remains, but how it’s pursued gives the impression of another American scam.
I strongly agree. But I also see the pragmatics: we have already spent the billions, there is (anti-labor, anti-equality) demand for AI, and bad actors will spam any system that treats novel text generation as proof of humanity.
So yes, we need a positive vision for AI so we can deal with these problems. For the record, AI has applications in healthcare accessibility. Translation and navigation of bureaucracy (including automating the absurd hoops insurance companies insist on; make insurance companies deal with the slop) come immediately to mind.
So yes, we need a positive vision for AI so we can deal with these problems
I am genuinely curious why you think we need a positive vision for AI.
I say this as someone who regularly uses LLMs for work (more as a supplement to web searching) and uses “AI” in other areas as well (low-resolution video upscaling). There are also many other very interesting use cases (often specialized) that tend to be less publicized than LLM-related stuff.
I still don’t see why we need a positive vision for AI.
From my perspective, “AI” is a tool; it’s not inherently positive or negative. But as things stand right now, the industry is dominated by oligarchs and conmen types (although they of course don’t have a monopoly in this area). And since we don’t really have a way to rein in the oligarchs (i.e. make them take responsibility for their actions), the discussion around a positive vision almost seems irrelevant. Let’s say we do have a positive vision for AI (I am not even necessarily opposed to such a vision); my question would be: so what?
Perhaps we are just talking about different things. :)
P.S. FWIW, I read your replies in this thread.
I am primarily trying to restate or interpret Schneier’s argument, to bring the link into the comments. I’m not sure I’m very good at it.
He points out a problem which is more or less exactly as you describe it. AI is on a fast track to be exploited by oligarchs and tyrants. He then makes an appeal: we should not let this technology, which is a tool just as you say, be defined by the evil it does. His fear is: “that those with the potential to guide the development of AI and steer its influence on society will view it as a lost cause and sit out that process.”
That’s the argument, afaict. I think the “so what” is something like: scientists will do experiments and analysis and write papers which inform policy, inspire subversive use, and otherwise use the advantages of the quick to make gains against the strong. See the four action items that they call for.
Thanks.
Can’t say I agree though. I can’t think of any historical examples where a positive agenda in and of itself made a difference.
One example would be industrialization at the end of the 19th century and the first part of the 20th century. One could argue it was far more disruptive of pre-industrial society (railroads, telegraph, radio, mass production) than the information age is now.
Clearly industrialization enabled mass benefits in society, but it took WW1/WW2 and the rise of uncompromising, brutal revolutionary regimes for societies to come to terms with the pros and cons of industrial society and find a middle path of sorts (until the next disruption).
Let’s hope it doesn’t get to that point in our times. That being said, the current oligarch regime comes off as even more self-assured than the beneficiaries of early industrial society (Gilded Age oligarchs in the US, the Romanov dynasty in Tsarist Russia).
The current batch of oligarchs has the benefit of hindsight, and yet there is no end to their hubris, with Bezos talking about millions living in space and comically stupid projects like data centres in orbit and The Simpsons-style “block the sun” schemes to address climate change.
If I were to try and play up his argument, I might appeal to “we can shorten the dark times”, Asimov’s Foundation style. But I admit my heart’s not in it. Things will very likely get worse before they get better, partially because I don’t particularly trust anyone with the ability to influence things just a bit to actually use that influence productively.
I do think this oligarchy has very different tools than those of old: far fewer mercenary assassinations of labor leaders, a very different and weirdly shaped stranglehold on media, and I put lower odds on a hot conflict with strikers.
I don’t know the history of hubris from oligarchs; were the Tsars or Barons also excited about any (absurd and silly) infrastructure projects explicitly for the masses? I guess there were the Ford towns in the Amazon?
we have already spent the billions
sunk cost fallacy
I think of it more like the genie being out of the lamp. It’s now very cheap to fine-tune a huge model and deploy it. Policy and regulation need to deal with that fact.
Sure, we sunk billions into this thing destroying our planet and we don’t know how to profit off it, but that’s no reason to stop or even slow down.
I think the argument is that, like with climate, it’s really hard to get people to just stop. They must be redirected with a new goal. “Don’t burn the rainforests” didn’t change oil company behavior.
The problem is that instead of finding better ways to stop it (regulations), you’re looking for “productive” ways to use it…
Apparently because you’ve pre-emptively given up.
But if you succeed it would lead to more AI and more damage to our planet.
I fully understand you believe you have good intentions; I’m just struggling to find a way to explain to you that intentions don’t matter. And I don’t think I’m going to come up with a way you’ll be able to understand.
It’s like if someone was stuck in a hole in the ground, and instead of wanting to climb out, you yank everyone else back into the hole when they try, and keep trying to get them to help you redecorate the hole.
I truly hope someone can present that in a way that gets through to you, because you are doing real damage.
I don’t think we should stop it, exactly. Just tax it appropriately based on environmental impact.
If that makes it prohibitively expensive, no great loss.
Success would lead to AI use that properly accounted for its environmental impact and had to justify its costs. That likely means much AI use stopping, and broader reuse of models we’ve already invested in (less competition in the space, please).
The main suggestion in the article is regulation, so I don’t feel particularly understood atm. The practical problem is that, like oil, LLM use can be done locally at a variety of scales. It also provides something that some people want a lot:
- Additional (poorly done) labor. Sometimes that’s all you need for a project
- Emulation of proof of work to existing infrastructure (eg, job apps)
- Translation and communication customization
It’s thus extremely difficult to regulate into non-existence globally (and would probably be bad if we did). So effective regulation must include persuasion and support for the folks who would most benefit from using it (or you need a huge enforcement effort, which I think has its own downsides).
The problem is that even if everyone else leaves the hole, there will still be these users. Just like with drug use, piracy, or gambling, it’s easier to regulate when we make a central, easy-to-access service and do harm reduction. To do this you need a product that meets the needs and mitigates the harms.
Persuading me I’m directionally wrong would require such evidence as:
- Everyone does want to leave the hole (hard, I know people who don’t. And anti-AI messaging thus far has been more about signaling than persuasion)
- That running LLMs locally really can’t be done, or can be made prohibitively difficult (hard; the Internet gives too much data, and making computing time expensive has a lot of downsides)
- Proposed regulation that would actually be enforceable at reasonable cost (haven’t thought hard about it, maybe this is easy?)
Oh. By Bruce Schneier.
hey, good catch, this could be a worthwhile read
EDIT: Nah. Entirely void of substance. And a shameless plug for his new book at the end. Shameful. Bruce Schneier used to be better.
Were we mad at the public technologist?
I was just a little surprised to see the familiar name but I don’t quite remember why. Maybe because of the downvotes.
The hyperscalers have a simple vision that’s easy to state:
“Agentic AI at ~$200/mo is coming to replace all your white-collar jobs”