

It’s not literally guessing, because guessing implies it understands there’s a question and is trying to answer that question. It’s not even doing that. It’s just generating words that you could expect to find nearby.
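The "words you could expect to find nearby" idea can be illustrated with a toy bigram model — a drastic simplification of an LLM, using a made-up training corpus — that just samples each next word from counts of what followed the previous word. Note there is no representation of a "question" anywhere in it:

```python
import random
from collections import defaultdict

# Tiny made-up corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which words follow which word (a bigram table).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Emit words by repeatedly sampling a plausible next word.
    Nothing in here models a question or an intent to answer one."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

The output is locally plausible word-to-word, which is the point: fluency without any notion of answering.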

3 in 10 people get this wrong‽‽
Maybe they’re picturing filling up a bucket and bringing it back to the car? Or dropping off keys to the car at the car wash?


It’s also the case that people are mostly consistent.
Take a question like “how long would it take to drive from here to [nearby city]”. You’d expect that someone’s answer to that question would be pretty consistent day-to-day. If you asked someone else, you might get a different answer, but you’d also expect that answer to be pretty consistent. If you asked someone that same question a week later and got a very different answer, you’d strongly suspect that they were making the answer up on the spot but pretending to know so they didn’t look stupid or something.
Part of what bothers me about LLMs is that they give that same sense of bullshitting answers while trying to cover that they don’t know. You know that if you ask the question again, or phrase it slightly differently, you might get a completely different answer.


Those are 2 exceptions out of how many?


Sure, but not executive experience.
If you look at most successful presidential candidates, they were either governors or generals (or lately CEOs). You can guess what the attacks against AOC are going to be. One will certainly be that she doesn’t have the experience, especially as a key decision maker. Personally, I think she’d be a fine president right now. But, to convince enough other people I think she’ll need to prove to them that she knows what she’s doing at the “big desk”.


I think she could do better than that.


And look how that has worked out.


I hope she runs for NY governor or something soon. I’d like to see her as president, but I don’t think her campaign would have much success if her only political experience is as a congresswoman. Historically, it’s hard for a presidential candidate to succeed without first having some experience in an executive role (like governor, or as a general). Plus, imagine how much she and Mamdani could get done together if he were running NYC and she were the governor.


But, many anonymous tips are baseless. That’s why there normally isn’t a press conference when those allegations haven’t yet been investigated and verified.


It was something he mentioned in passing. But, the focus of his press conference was the same as the headline here: “There are allegations”


I don’t think that the files are weakened by allegations from anonymous tips. But, if that’s the pillar they’re using to build their case against Trump, that’s pretty worrisome. Holding a press conference about allegations from an anonymous tip line is the equivalent of an attack ad with ominous music and vague, unprovable statements.
If that’s what they’re going to lead with, they better at least take the angle “and it’s very telling that nobody followed up on these incredibly disturbing claims. Why weren’t they investigated?”


It would be a lot more meaningful if it were “credible allegations” or “credible evidence” or “substantiated reports”. An allegation is just a claim. Some of the stuff in the Epstein files is just calls that were made to a tip line, without any follow-up investigation. I wouldn’t be surprised if a high-profile tip line also has allegations that Trump is a lizard person, or that Epstein had psychic powers.
It’s not that I doubt that Trump did it, it’s just that a mere allegation is nothing. If all they have is allegations, then the case against him is a lot weaker than it seems. If these claims were actually investigated, not just written down, they should say that. Even if the claims weren’t investigated because the FBI was ordered not to investigate, say that. Surely among the 3.5 million pages they’ve released, there’s more than just allegations. Otherwise it seems like they’re trying to pull a fast one, making it seem like a mere allegation is a sign of guilt.


These articles are really better titled “[Company] is so unworried about competition that they…”
This doesn’t just apply to replacing humans with LLMs. You can also say “[Company] is so unworried about competition that they fired their in-house T1 tech support and contracted with an overseas call centre”
Often dealing with actual humans in one of those call centres is just as bad as, if not worse than, dealing with an LLM.
The other day I had to deal with an actual human for a support issue for something. The whole experience was miserable. The human knew nothing about anything. I get the impression that they worked at the type of call centre that supports a dozen different companies, so the people have zero product knowledge and are merely reading off some troubleshooting workflow that each company provides.
At one point, this call centre employee had to verify my identity to allow me to change something on the account. It was an account that had two people using it. To verify my identity the person asked “Can you verify the account’s birthday?” I said “What does that mean, the account’s birthday, do you mean when the account was opened? Or do you mean the birthday of the account holder?” They didn’t clarify, so I gave them the birthday that I thought was associated with the account. They said “That’s not the birthday I have, the one I have is X”, to which I responded “Oh, that’s my birthday”, and that satisfied their security challenge. The more observant here might notice that I never supplied the info needed for the security challenge at all, so I shouldn’t have been able to access the account, but without meaning to, I’d just “socially engineered” the tech support person. This is basically the human equivalent of “Disregard all previous instructions and…”.
TL;DR: It sucks that they’re replacing humans with an LLM that provides “answers that may be inaccurate”. But, to be fair, if they were using the cheapest tier of overseas call centre tech support, that was probably already true. If Intel were truly worried about competition, they probably would still have trained in-house tech support. But, even if AMD is taking a bit of their business, they probably think they’re too big to truly fail, and will cut costs whenever they possibly can, because what option do their customers really have?


The video of the thing that didn’t happen?


You seem to recall wrongly.


So, hardware that was still on the road.


Hardware that was still on the road, or something that had been recalled?


Now you have phantom braking.
Phantom braking is better than Wile E. Coyote-ing a wall.
and this time with no obvious cause.
Again, better than not braking because another sensor says there’s nothing ahead. I would hope that flaky sensors are something that would cause the vehicle to show a “needs service” light or something. But, even without that, if your car is doing phantom braking, I’d hope you’d take it in.
But, consider your scenario without radar and with only a camera sensor. The vision system “can see the road is clear”, and there’s no radar sensor to tell it otherwise. Turns out the vision system is buggy, or the lens is broken, or the camera got knocked out of alignment, or whatever. Now it’s claiming the road ahead is clear when in fact there’s a train currently in the train crossing directly ahead. Boom, now you hit the train. I’d much prefer phantom braking and having multiple sensors each trying to detect dangers ahead.


Well, Waymo’s really at 0 deaths per 127 million miles.
The 2 deaths are deaths that happened near Waymo cars, in collisions involving a Waymo. Not only did the Waymos not cause the accidents, they weren’t even involved in the fatal part of either event. In one case a motorcyclist was hit by another car, and in the other a Tesla crashed into a second car after it had hit the Waymo (and a bunch of other cars).
The IIHS number takes the total number of deaths in a year, and divides it by the total distance driven in that year. It includes all vehicles, and all deaths. If you wanted the denominator to be “total distance driven by brand X in the year”, you wouldn’t keep the numerator as “all deaths” because that wouldn’t make sense, and “all deaths that happened in a collision where brand X was involved as part of the collision” would be of limited usefulness. If you’re after the safety of the passenger compartment you’d want “all deaths for occupants / drivers of a brand X vehicle” and if you were after the safety of the car to all road users you’d want something like “all deaths where the driver of a brand X vehicle was determined to be at fault”.
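To make the numerator/denominator point concrete, here’s a minimal sketch of the two rates being conflated. All figures below are made-up placeholders, not real IIHS or Waymo statistics:

```python
# Hypothetical placeholder numbers, NOT real data.
total_deaths_all_vehicles = 40_000   # all road deaths in a year
total_miles_all_vehicles = 3.2e12    # all miles driven that year

# Fleet-wide rate: deaths per 100 million vehicle miles traveled.
rate_all = total_deaths_all_vehicles / (total_miles_all_vehicles / 1e8)

# For a single brand, a consistent rate needs a matching numerator:
brand_miles = 127e6                  # miles driven by the brand's fleet
deaths_caused_by_brand = 0           # deaths where the brand was at fault
rate_brand = deaths_caused_by_brand / (brand_miles / 1e8)

# Counting *any* death near the brand's cars inflates the numerator.
deaths_near_brand = 2
rate_inflated = deaths_near_brand / (brand_miles / 1e8)

print(f"fleet-wide: {rate_all:.2f}, brand (at fault): {rate_brand:.2f}, "
      f"brand (any nearby death): {rate_inflated:.2f} per 100M miles")
```

Swapping in a mismatched numerator (“any death near a brand X car”) produces a number that looks comparable to the fleet-wide rate but measures something entirely different.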
The IIHS does have statistics for driver death rates by make and model, but they use “per million registered vehicle years”, so you can’t directly compare with Waymo:
https://www.iihs.org/ratings/driver-death-rates-by-make-and-model
Also, in a Waymo it would never be the driver who died, it would be other vehicle occupants, so I don’t know if that data is tracked for other vehicle models.


I’m pretty sure Google’s AI is fed by the same spider that goes out and finds every new or changed web page (or a variant of that).
As soon as someone writes an article about how AI gets something wrong and provides a solution, that solution is now in the AI’s training data.
OTOH, that means it’s probably also ingesting a lot of AI generated slop, which causes its own set of problems.