

I could see an argument for medical devices, HVAC, and vehicles… but I don’t think I’d agree with it. Except maybe medical.
Consoles and toothbrushes though? What the removed?
The idea that plane safety hinges on every passenger collectively agreeing to, and remembering to, push a button on their devices is absolutely insane. You think the regulating bodies that require multiple backups for every possible system also just trust that every passenger pushes a button and that every flight attendant actually checks every passenger’s devices?
This whole thread is a lot of hullabaloo about the legality of the specific way YouTube is running its ad block detection, framed as though it makes the entire concept of ad block detection illegal.
As much as you may hate YouTube and/or their ad block policies, this whole take is a dead end. Even if, by the weird stretch he’s making, the current system is illegal, there are plenty of ways for Google to detect and act on this without going anywhere remotely near that law. The best-case scenario here is that Google rewrites the way they’re doing it and redeploys the same thing.
That might cost them a few weeks of development time, but it doesn’t stop Google from refusing to serve you video until you watch ads. This whole argument is getting way more weight than it deserves because he keeps flaunting credentials that don’t change the reality of what Google could do here even if the argument held water.
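To make “plenty of ways to detect” concrete, here’s a rough sketch of one well-known client-side technique (a “bait” element that cosmetic filters hide). The class names and timing are my assumptions, and this is not a claim about how YouTube actually does it:

```typescript
// Sketch of a classic client-side ad-block check: insert a "bait" element
// that looks like an ad and see whether the blocker hides it.
// The class names and the 100ms delay are illustrative assumptions.
function detectAdBlock(): Promise<boolean> {
  return new Promise((resolve) => {
    const bait = document.createElement("div");
    bait.className = "ad adsbox ad-banner text-ad"; // names blockers commonly target
    bait.style.cssText = "position:absolute;left:-9999px;width:10px;height:10px;";
    document.body.appendChild(bait);

    // Give cosmetic filters a moment to apply, then check visibility.
    setTimeout(() => {
      const blocked =
        bait.offsetHeight === 0 || getComputedStyle(bait).display === "none";
      bait.remove();
      resolve(blocked);
    }, 100);
  });
}

// Usage: gate playback on the result (again, purely illustrative).
detectAdBlock().then((blocked) => {
  if (blocked) {
    console.log("Ad blocker detected: show the nag screen / pause playback");
  }
});
```

And they could just as easily lean on server-side signals, like whether the ad requests ever arrived, which wouldn’t touch the client at all or anything the legal argument is aimed at.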
Nah, this problem is actually too hard to solve with LLMs. They don’t have any structure or understanding of what they’re saying, so there’s no way to write better guardrails… unless you build some other system that tries to make sense of what the LLM says, and that approaches the difficulty of just building an intelligent agent in the first place.
So no, if this law came into effect, people would just stop using AI for this. It’s too cavalier. And IMO they probably should stop for cases like this unless there’s direct human oversight of everything coming out of it, which also probably just wouldn’t happen.
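To be concrete about what “some other system that tries to make sense of what the LLM says” would even look like, here’s a minimal sketch of a second-pass guardrail. classifyRisk is a made-up placeholder, and writing it well is basically the original problem all over again, which is the point:

```typescript
// Sketch of a second-pass guardrail: every LLM reply goes through a separate
// checker before a user sees it. classifyRisk() is a hypothetical stand-in;
// in reality it would be another model or rules engine, and building one that
// actually "understands" the reply is roughly as hard as the original problem.
type Verdict = "allow" | "needs_human_review" | "block";

interface LlmReply {
  prompt: string;
  text: string;
}

// Hypothetical risk check; here just a crude keyword rule for illustration.
function classifyRisk(reply: LlmReply): Verdict {
  const risky = /dosage|diagnosis|legal advice/i.test(reply.text);
  return risky ? "needs_human_review" : "allow";
}

function guardrail(reply: LlmReply, reviewQueue: LlmReply[]): string | null {
  switch (classifyRisk(reply)) {
    case "allow":
      return reply.text;       // safe to show directly
    case "needs_human_review":
      reviewQueue.push(reply); // hold for the direct human oversight mentioned above
      return null;
    default:
      return null;             // blocked outright, never shown
  }
}

// Usage sketch
const queue: LlmReply[] = [];
const out = guardrail(
  { prompt: "medical question", text: "Take a 500mg dosage twice daily." },
  queue
);
console.log(out, queue.length); // null, 1 -> held for a human to review
```

The keyword rule obviously isn’t a real guardrail; the gap between it and something trustworthy is exactly the “intelligent agent” problem.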