Unrepentant Techno-Hermit, forever trying to make less do more.

  • 0 Posts
  • 27 Comments
Joined 2 months ago
Cake day: March 8th, 2025




  • Paying a premium to rid yourself of institutional knowledge and existing experience, then paying again to fill the gap with ignorant novices, then paying yet again to train them back up to former levels of productivity - all while paying for the difference in the interim: That’s government efficiency, baby!

    I mean, why pay for something once when paying four times over for the thing you already had before you threw it out is clearly four times as good - just like how a double standard is twice as good as a boring singular standard. As Big Balls from DOGE would no doubt say: “That’s math”.





  • Almost certainly not, no. Evolution may work faster than once thought, but not that fast. The problem is that societal - and in particular technological - development is now vastly outstripping our ability to adapt. It’s not that people are getting dumber per se - it’s that they’re having to deal with vastly more stuff. All. The. Time. For example, consider the world as it was a scant century ago - virtually nothing in evolutionary terms. A person didn’t have to cope with what was going on on the other side of the planet, and probably wouldn’t even know about it for months, if ever. Now? If an earthquake hits Paraguay, you’ll be aware of it in minutes.

    And you’ll be expected to care.

    Edit: Apologies. I wrote this comment as you were editing yours. It’s quite different now, but you know what you wrote previously, so I trust you’ll be able to interpret my response correctly.



  • Thank you. I appreciate you saying so.

    The thing about LLMs in particular is that - when used like this - they constitute one such grave positive feedback loop. I have no problem in principle with machine learning. It can be a great tool for illuminating otherwise completely opaque relationships in large scientific datasets, for example, but a polynomial binary space partitioning of a hyper-dimensional phase space is just a statistical knowledge model. It does not have opinions. All it can do is codify what appears to be the consensus of the input it’s given (there’s a toy sketch of this at the end of this comment). Even assuming - which may well be far too generous - that the input is truly unbiased, at best all it’ll tell you is what a bunch of morons think is the truth. At worst, it’ll just tell you what you expect to hear. It’s what everybody else is already saying, after all.

    And when what people think is the truth and what they want to hear are both nuts, this kind of LLM echo chamber suddenly becomes unfathomably dangerous.
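
    To make that concrete, here’s a deliberately dumb sketch - plain bigram counting in Python, nothing remotely like a real LLM, with a four-line corpus invented purely for illustration - of what “codifying the consensus” means: the model has no opinion, it just repeats whatever most of its input said.

    ```python
    # A frequency model has no opinions: it ranks continuations purely by
    # how often they appear in its training data. (Toy corpus, invented
    # for illustration only - not any real dataset.)
    from collections import Counter, defaultdict

    corpus = [
        "the earth is flat",   # the loud majority...
        "the earth is flat",
        "the earth is flat",
        "the earth is round",  # ...drowns out the correct minority
    ]

    # Count bigram transitions: word -> Counter of following words.
    bigrams = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            bigrams[a][b] += 1

    def most_likely_next(word):
        """Return the highest-frequency continuation - pure consensus."""
        return bigrams[word].most_common(1)[0][0]

    # The model dutifully codifies the majority view, right or wrong.
    print("the earth is", most_likely_next("is"))  # prints: the earth is flat
    ```

    Scale that up by a few billion parameters and the mechanics get fancier, but the epistemology doesn’t: garbage consensus in, confident garbage out.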