

Couldn’t they have used AI to figure out how to make the settings less obscure instead?
I don’t mean to be difficult. I’m neurodivergent.
That’s a shadowban
It is an interesting idea. I was able to put it together from context + “rennt”. A more obscure example would have been harder to interpret.
That can be defeated with abliteration, but I can only see it as an unfortunate outcome.
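For anyone unfamiliar, abliteration roughly means estimating a “refusal direction” in the model’s activation space and then removing that direction from the weights so the model can no longer write along it. A minimal sketch of the projection step, assuming you already have such a direction vector (the names here are illustrative, not from any particular library):

```python
# Toy sketch of directional ablation ("abliteration"), assuming a refusal
# direction r has already been estimated from contrastive prompt pairs.
import torch

def ablate_direction(W: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Remove the component of W's output that lies along refusal_dir: W' = (I - r r^T) W."""
    r = refusal_dir / refusal_dir.norm()   # unit vector in output/activation space
    return W - torch.outer(r, r) @ W

# Illustrative usage with random stand-ins for a weight matrix and a direction.
W = torch.randn(8, 8)
r = torch.randn(8)
W_ablated = ablate_direction(W, r)
# The ablated layer now contributes nothing along r.
print(torch.allclose(r @ W_ablated, torch.zeros(8), atol=1e-5))
```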
What do you mean
Inescapable consequences of letting zero-integrity optimization machines (psychopaths) run companies like this.
Of course they do this, what else are they supposed to do? It’s their nature. Expecting otherwise is idiotic.
Getting outraged by this is like getting mad at the sun for rising. But if the legal system displays this absurd sham outrage, everyone will continue to be distracted from the actual problem, which is that society has no mechanism for intercepting these individuals and keeping them away from roles where they will obviously do things like this, because of course they will.
This is permitted again and again, and we get the same result again and again, all while the people who supposedly safeguard society gape and scratch their heads like orangutans. They are utterly taken aback that permitting the same transparently stupid situation doesn’t magically start working out differently, which is an object lesson in the meaning of stupidity itself.
That’s not true at all. Notch valued having a candy wall, and he made good on it.
The article talks of ChatGPT “inducing” this psychotic/schizoid behavior.
ChatGPT can’t do any such thing. It can’t change your personality organization. Those people were already there, at risk, masking high enough to get by until they could find their personal Messiahs.
It’s very clear to me that LLM training needs to include protections against getting dragged into a paranoid/delusional fantasy world. People who are significantly on that spectrum (as well as borderline personality organization) are routinely left behind in many ways.
This is just another area where society is not designed to properly account for or serve people with “cluster” disorders.
They’re banning 10+ year accounts over trifling things and it’s got noticeably worse this year. The widespread practice of shadowbanning makes it clear that they see users as things devoid of any inherent value, and that unlike most corporations, they’re not concerned with trying to hide it.
This is certainly not the first time this has happened. There’s nothing to stop people from asking ChatGPT et al to help them argue. I’ve done it myself, not letting it argue for me but rather asking it to find holes in my reasoning and that of my opponent. I never just pasted what it said.
I also had a guy post a ChatGPT response at me (he said that’s what it was) and although it had little to do with the point I was making, I reasoned that people must surely be doing this thousands of times a day and just not saying it’s AI.
To say nothing of state actors, “think tanks,” influence-for-hire operations, etc.
The description of the research in the article already conveys enough to replicate the experiment, at least approximately. Can anyone doubt this is commonplace, or that it has been for the last year or so?
it means peDantic
What kind of sorts do you manage?
If you run it locally, your conversations don’t go anywhere.
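For context, here’s a minimal sketch of what “run it locally” can look like, using llama-cpp-python with a GGUF model file you’ve downloaded yourself (the model path below is a placeholder). Inference happens entirely on your own hardware, so the conversation never leaves the machine:

```python
# Local-only inference sketch; assumes llama-cpp-python is installed and a
# GGUF model file is already on disk. No network calls are made at inference time.
from llama_cpp import Llama

llm = Llama(model_path="./models/some-model.gguf", n_ctx=2048)
out = llm("Explain why local inference keeps conversations private.", max_tokens=128)
print(out["choices"][0]["text"])
```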
Most Don’t Know This
replace “the joke” with “irony” and then send the image to yourself
I think it’s practical for most people to pay $2 for that
It’s great for shitposting
In 2025, AIs function more like employees. Coding AIs increasingly look like autonomous agents rather than mere assistants: taking instructions via Slack or Teams and making substantial code changes on their own, sometimes saving hours or even days.
They already lost me, not even a minute in.
It’s still a graphing calculator. It still sucks at writing code. It still breaks things when it modifies its own code. It’s still terrible at writing unit tests, and any programmer who’d let it write substantial production and test code is like a lawyer who’d send the front desk attendant to argue in court.
It also has no idea about office politics, individual personalities, corporate pathology, or anything else a human programmer realistically has to know. Partly because it has anterograde amnesia.
So, since the authors screwed that up, my guess is the rest of the article is equally useless and maybe worse.
(acts confused in French)
I like AI, sort of. But this is ghoulish.