I think still too many people missed the turning point when Microsoft suddenly stopped releasing products/software that were superior in basically all areas to their previous versions. I think that turning point was Windows 8 already, for many who consider Windows 8 a single-time mistake like ME or Vista it was Windows 10, for others it took until Windows 11 until they noticed the decline of Windows as a whole.
And it’s not just MS, but a lot of consumer tech is growing anti-consumer and gets enshittified to the point of where you really have to think hard whether or not you even want the new stuff they’re spewing out. My consumer habits have certainly changed to be much more rigorous than, say, 10-20 years ago. I read a lot more reviews these days and from many more different sources bevore I even think of buying something new.
“AI PCs” will increase your dependency on MS’ online services (which is probably the main thing that MS wants), decrease your privacy even more (also what MS wants - that’s a lot of data for sale), consume even more energy (on a planet with limited resources), sometimes increase your productivity (which is probably the most advantage you’re ever getting out of it) and other times royally screw you over (due to faulty and insecure AI behavior). Furthermore, LLMs are non-deterministic, meaning that the output (or what they’re doing) changes slightly every time you repeat even the same request. It’s just not a great idea to use that for anything where you need to TRUST its output.
I don’t think it will be a particularly good deal. And nothing MS or these other companies that are in the AI business say can ever be taken at face value or as truthful information. They’ve bullshitted their customers way too much already, way more than is usual for advertisements. If this was still the '90s or before 2010 or so - maybe they’d have a point. But this is 2026. Unless proven otherwise, we should assume bullshit by default.
I think we’re currently in a post-factual hype-only era where they are trying to sell you things that won’t ever exist in the way they describe them, but they’ll claim it will always happen “in the near future”. CEO brains probably extrapolate “Generative AI somewhat works now for some use cases so it will surely work well for all use cases within a couple of years”, so they might believe the stories they tell all day themselves, but it might just as well never happen. And even if it DID happen, you’d still suffer many drawbacks like insane vendor dependencies/lock-ins, zero privacy whatsoever, sometimes faulty and randomly changing AI behavior, and probably impossible-to-fix security holes (prompt injection and so on - LLMs have no clear boundary between data and instructions and it’s not that hard to get them to reveal secret data or do things they shouldn’t be doing in the first place. If your AI agent interprets a malicious instruction as valid, and it can act on your behalf on your system, you have a major problem).
Supposedly, according to the Microsoft article, AI PCs CoPilot+ PCs are capable of translating stuff on the fly (which sounds awesome) and generating images, all locally. Allegedly.
I have yet to run into anybody that’s actually talked about these so-called innovations though. I have a PC with Windows and the beefy GPU and I would love to get live transcriptions. But the (MS) article doesn’t even mention how I would do that…
Even if everything Microsoft promise was true, though, the lines sure are intentionally blurred between what runs locally and what doesn’t.
I think still too many people missed the turning point when Microsoft suddenly stopped releasing products/software that were superior in basically all areas to their previous versions. I think that turning point was Windows 8 already, for many who consider Windows 8 a single-time mistake like ME or Vista it was Windows 10, for others it took until Windows 11 until they noticed the decline of Windows as a whole.
And it’s not just MS, but a lot of consumer tech is growing anti-consumer and gets enshittified to the point of where you really have to think hard whether or not you even want the new stuff they’re spewing out. My consumer habits have certainly changed to be much more rigorous than, say, 10-20 years ago. I read a lot more reviews these days and from many more different sources bevore I even think of buying something new.
“AI PCs” will increase your dependency on MS’ online services (which is probably the main thing that MS wants), decrease your privacy even more (also what MS wants - that’s a lot of data for sale), consume even more energy (on a planet with limited resources), sometimes increase your productivity (which is probably the most advantage you’re ever getting out of it) and other times royally screw you over (due to faulty and insecure AI behavior). Furthermore, LLMs are non-deterministic, meaning that the output (or what they’re doing) changes slightly every time you repeat even the same request. It’s just not a great idea to use that for anything where you need to TRUST its output.
I don’t think it will be a particularly good deal. And nothing MS or these other companies that are in the AI business say can ever be taken at face value or as truthful information. They’ve bullshitted their customers way too much already, way more than is usual for advertisements. If this was still the '90s or before 2010 or so - maybe they’d have a point. But this is 2026. Unless proven otherwise, we should assume bullshit by default.
I think we’re currently in a post-factual hype-only era where they are trying to sell you things that won’t ever exist in the way they describe them, but they’ll claim it will always happen “in the near future”. CEO brains probably extrapolate “Generative AI somewhat works now for some use cases so it will surely work well for all use cases within a couple of years”, so they might believe the stories they tell all day themselves, but it might just as well never happen. And even if it DID happen, you’d still suffer many drawbacks like insane vendor dependencies/lock-ins, zero privacy whatsoever, sometimes faulty and randomly changing AI behavior, and probably impossible-to-fix security holes (prompt injection and so on - LLMs have no clear boundary between data and instructions and it’s not that hard to get them to reveal secret data or do things they shouldn’t be doing in the first place. If your AI agent interprets a malicious instruction as valid, and it can act on your behalf on your system, you have a major problem).
Supposedly, according to the Microsoft article,
AI PCsCoPilot+ PCs are capable of translating stuff on the fly (which sounds awesome) and generating images, all locally. Allegedly.I have yet to run into anybody that’s actually talked about these so-called innovations though. I have a PC with Windows and the beefy GPU and I would love to get live transcriptions. But the (MS) article doesn’t even mention how I would do that…
Even if everything Microsoft promise was true, though, the lines sure are intentionally blurred between what runs locally and what doesn’t.
Yes, and they intentionally want those lines to be as blurry as possible.