

I bet that if these get used, soon after there will be new extreme-maintenance videos that make those cell tower ones look like nothing. Probably some guy hanging from a powered cable-climbing device, filming the things on the ground getting smaller and smaller, occasionally taking a puff from an asthma inhaler because he was told an oxygen tank would cause weight issues (it's actually about financial issues), until enough people die that they realize it's cheaper to pay for oxygen than to train new workers.





If you want a demo of how bad these AI coding agents are, build a medium-sized script with one, something with a parse -> process -> output flow that isn't trivial. Let it do the debugging, too (i.e., feed it the error message or describe the unwanted behaviour).
You’ll probably get the desired output if you’re using one of the good models.
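For concreteness, this is the kind of parse -> process -> output skeleton I have in mind. It's a toy sketch of the shape only (the actual test should be meatier than this), and the task and names here are hypothetical, not from any specific session:

```python
# Hypothetical example of the parse -> process -> output shape:
# parse "name,value" lines, aggregate per name, print totals.

def parse(lines):
    """Parse "name,value" lines into (name, float) pairs, skipping blanks."""
    records = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        name, value = line.split(",")
        records.append((name.strip(), float(value)))
    return records

def process(records):
    """Sum the values per name."""
    totals = {}
    for name, value in records:
        totals[name] = totals.get(name, 0.0) + value
    return totals

def output(totals):
    """Format one "name: total" line per name, sorted by name."""
    return "\n".join(f"{name}: {total:g}" for name, total in sorted(totals.items()))

if __name__ == "__main__":
    sample = ["a,1", "b,2.5", "a,3", ""]
    print(output(process(parse(sample))))
```

The point of using a multi-stage flow like this is that the bugs and the optimization opportunities can hide in any stage, or in the hand-offs between them.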
Now ask it to review the code or optimize it.
If it were a good coding AI, this step shouldn't turn up much, since it would have been applying the same reasoning while writing the code in the first place.
But in my experience, that isn't what happens. Ask for a review and it has a lot of notes; it can also find and implement real optimizations. The weights are the same; the only difference is that the prompt context has changed from "write code" to "optimize code", which changes the correlations involved.

There is no "write optimal code" mode, because it's trained on everything and the kitchen sink: you get correlations from good code, from newbie coders, and from lesson examples of bad ways to do things (especially ones presented in a "discovery" format, where the professor intended to explain in person why the slide's approach is bad but never wrote that on the slide itself).
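To put a face on the "same weights, different prompt" effect, here's a hypothetical before/after. I'm not claiming any particular model emits exactly this; it's just the canonical shape of what a first pass writes versus what the "optimize" prompt then surfaces about its own output:

```python
# Hypothetical before/after: both dedupe a list while preserving order.

def dedupe_first_pass(items):
    """Naive first draft: O(n^2), a linear scan of the result per item."""
    seen = []
    for item in items:
        if item not in seen:  # list membership check is O(n)
            seen.append(item)
    return seen

def dedupe_after_review(items):
    """What the "optimize this" pass tends to find: O(n) with a set."""
    seen = set()
    out = []
    for item in items:
        if item not in seen:  # set membership check is O(1) on average
            seen.add(item)
            out.append(item)
    return out
```

Both versions return the same result; only the second reflects the reasoning you'd hope was applied the first time around.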