It’s kind of funny how AI has the exact same problems some humans have.
I always thought AI wouldn’t have those kinds of problems, because they would be carefully fed accurate information.
Instead they are taught from things like Facebook and the thing formerly known as Twitter.
What an idiotic timeline we are in. LOL
I thought the main issue was that AIs don’t really know how to say “I don’t know” or second-guess themselves, since that would take a much more robust architecture with multiple feedback loops. Like a brain.
Anyway, LLMs aren’t the only AI that do this. So them being trained on Facebook data certainly isn’t the whole issue.
Yeah, it’s the old garbage-in, garbage-out problem: the AI algorithms don’t really understand what they’re outputting.
I think at this point voice recognition and text generation AI would be more useful as something like a phone assistant. You could tell it complex things like “Mute my phone for the next 2 hours” or “Notify me if I receive an email from John Smith.” Those sorts of things could be easily done by AI algorithms that A) understand your voice and B) are programmed to know all the features of the OS. Hopefully, with a known dataset like a phone OS, there shouldn’t be hallucination problems; the AI could just act as an OS concierge.
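To make the “OS concierge” idea concrete, here’s a minimal sketch of how the command-dispatch half might look once speech-to-text hands you a string. Everything here (the pattern list, the action names) is hypothetical, not any real phone API; the point is that a fixed intent table over known OS features has nothing to hallucinate: either a pattern matches a real feature or the assistant says it can’t help.

```python
import re

# Hypothetical intent table: regex pattern -> name of a known OS action.
INTENT_PATTERNS = [
    (re.compile(r"mute my phone for the next (\d+) hours?"), "set_mute_timer"),
    (re.compile(r"notify me if i receive an email from (.+)"), "set_email_alert"),
]

def dispatch(command: str):
    """Map a transcribed voice command to an OS action plus its arguments."""
    command = command.lower().strip().rstrip(".")
    for pattern, action in INTENT_PATTERNS:
        match = pattern.fullmatch(command)
        if match:
            return action, match.groups()
    # Nothing matched a known feature: refuse instead of making something up.
    return "unknown_command", ()

print(dispatch("Mute my phone for the next 2 hours"))
print(dispatch("Notify me if I receive an email from John Smith."))
```

The generative model would only be needed upstream, to paraphrase messy speech into one of these canonical command forms.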
Seems Siri and Alexa could already do things like that without needing LLMs trained on Facebook shit.
What weirds me out is that the things it has issues with when generating images/video are basically a list of things lucid dreamers check on to see if they’re awake or dreaming.
- Hands. Are your hands… hands? Do they make sense?
- Written language. Does it look like normal written language?
- (“Turn the lights off” and “pinch your nose and breathe through it” are two checks that don’t apply so much here.)
- How did I get here? Where was I before this? Does the transition make sense?
- Mirrors. Are they accurate?
- Displays on digital devices. Do they look normal?
- Clocks. Digital and analog… Do they look like they’re telling time? Even if they do, look away and check again.
- (“Physics: try to do something physically impossible, like poking your finger through your palm” and “do you recognize people / do they recognize you” are two more that aren’t relevant.)
But still… It’s kinda remarkable.
Also, Nvidia launched their Earth-2 Earth simulator recently. So, simulation theory confirmed, I guess.
Also, check your cell phone. Despite how ubiquitous they are in our daily lives, I don’t think I’ve seen a single cell phone in my dreams. Or any other phone, for that matter.
And now that I think about it, I’ve definitely had a dream of being in my living room where there’s a TV, but I don’t remember the TV actually being in the dream.
Weird.
-
“Instead they are taught from things like Facebook and the thing formerly known as Twitter.”
Imagine if schools taught that, to inform yourself about all the important things, you should read as many toilet walls as newspapers…
Right? In all science fiction, artificial intelligence starts out better than us, and the only question is whether it can capture some idiosyncratic element of “being human.” Instead, AI has started out dumber than us, and we’re all standing around saying “uh what is this good for?”
It’s insane how many people already take AI as more capable and accurate than other mediums. I’m not against AI, but I’m definitely against the bubble of worship some people have put it in.
They can’t. AI has hallucinations. Google has shown that AI can’t reliably use external sources, either.
I’m 100% sure he can’t. Or at least, not for LLMs specifically. I’m not an expert, so feel free to ignore my opinion, but from what I’ve read, “hallucinations” are a feature of the way LLMs work.
As others are saying it’s 100% not possible because LLMs are (as Google optimistically describes) “creative writing aids”, or more accurately, predictive word engines. They run on mathematical probability models. They have zero concept of what the words actually mean, what humans are, or even what they themselves are. There’s no “intelligence” present except for filters that have been hand-coded in (which of course is human intelligence, not AI).
“Hallucinations” is a total misnomer because the text generation isn’t tied to reality in the first place, it’s just mathematically “what next word is most likely”.
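A toy sketch of the “predictive word engine” idea above, since it’s easy to show in a few lines. This is obviously not how a real LLM works (those are neural networks trained over enormous corpora), but the objective has the same flavor: count which word tends to follow which, then emit the most probable successor, with no notion of meaning anywhere.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; every sentence is equally "true" to the model.
corpus = (
    "the dog chased the ball . "
    "the dog ate the bone . "
    "the cat chased the mouse ."
).split()

# Count successors: successors["the"] ends up as {"dog": 2, "ball": 1, ...}
successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def next_word(word: str) -> str:
    """Return the statistically most likely next word -- pure frequency."""
    return successors[word].most_common(1)[0][0]

print(next_word("the"))  # "dog": it follows "the" most often in the corpus
```

Whether the predicted continuation is factual never enters the computation, which is why “hallucination” is a misleading word for ordinary operation.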
An LLM once explained to me that it didn’t know, it simulated an answer. I found that descriptive.
All we know about ourselves is what’s in our memories. The way normal writing or talking works is just picking what words sound best in order.
That’s not the whole story. “The dog swam across the ocean.” is a grammatically valid sentence with correct word order. But you probably wouldn’t write it because you have a concept of what a dog actually is and know its physiological limitations make the sentence ridiculous.
The LLMs don’t have that kind of smarts. They just blindly mirror what we do. Since humans generally don’t put those specific words together, the LLMs avoid it too, based solely on probability. If lots of people started making bold claims about oceanfaring canids (e.g. as a joke), then the LLMs would absolutely jump onboard with no critical thinking of their own.
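The “jump onboard” point can be sketched with a toy frequency model (invented data, nothing like a real LLM’s training pipeline): the model has no concept of dogs or oceans, so whatever people write often enough, it will predict.

```python
from collections import Counter

base = ["the dog chased the ball"] * 50     # ordinary sentences
joke = ["the dog swam across the ocean"] * 100  # the joke goes viral

def after_dog(sentences):
    """Most frequent word following 'dog' across all the sentences."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        for current, following in zip(words, words[1:]):
            if current == "dog":
                counts[following] += 1
    return counts.most_common(1)[0][0]

print(after_dog(base))          # 'chased'
print(after_dog(base + joke))   # 'swam' -- the model happily follows the joke
```

No physiological knowledge ever vetoes the output; frequency alone decides.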
Humans do the same thing. Have you heard of religion?
Everything these AIs output is a hallucination. Imagine if you were locked in a sensory deprivation tank, completely cut off from the outside world, and only had your brain fed the text of all books and internet sites. You would hallucinate everything about them too. You would have no idea what was real and what wasn’t because you’d lack any epistemic tools for confirming your knowledge.
That’s the biggest reason why AIs will always be bullshitters as long as they’re disembodied software programs running on a server. At best they can be a brain in a vat, which is a pure hallucination machine.
If Apple can stop AI hallucination, any other AI company can also stop AI hallucination, which is something they would have already done instead of making AI seem like a joke on purpose. AI hallucinations are the sort of phenomenon nobody has control over. Why would Tim Cook have unique control over it?
Unless Apple becomes the first to figure out how; then they suddenly have a huge leg up on the rest. Which is kinda how Apple has made its bread for most of its successes in my lifetime.
Eh. I don’t think Apple’s gonna be a pioneer in AI. If anybody can do it, it would be OpenAI figuring it out first. Happy to be proven wrong, tho.
Oh, I’m not suggesting they will or are able to; I’m coming from a strategic standpoint.
Yeah. When Apple says it’s coming into a market, they mean they have already perfected it.
Of course they can’t. Any product or feature is only as good as the data underneath it. Training data comes from the internet, and the internet is full of humans. Humans make and write weird shit, so the data the LLM ingests is weird, and that creates hallucinations.
Well yeah, it’s using the same dataset as MS Copilot.
Spitting out inaccurate answers (I wish the media would stop feeding into calling it something that sounds less bad, like “hallucinations”) is not something that will go away until the LLM gains the ability to discern context.
I’m not exaggerating when I say there are only like a dozen true experts in generative AI on the planet, and even they’re not completely sure what’s going on in that black box. And as far as I’m aware, Tim Cook isn’t even one of them. How would he know?
I don’t know why they’re trying to shove AI down our throats. They need to take their time, allow it to evolve.
Because it’s all corporations, and a huge part of the corporate capitalist system is infinite growth. They want returns, BIG ones. When? Right the fuck now. How do you do that? Well, AI could turn the world upside down like the dot-com boom, so they dump tons of money into AI. So… is the AI done? Oh no no no, we’re at machine learning; AI is pretty far down the road, actually. What? We’re firing the AI department heads and releasing this machine learning software as 100%-all-the-way-done AI?
It’s the same reason Section 8 housing and low-cost housing don’t work under corporate capitalism. It’s profitable to take government money; it’s profitable to have low-rent apartments. That’s not the problem. The problem is THEY NEED THE GROWTH NOW NOW NOW!!! If you own a condo with high-wage renters and add another $100 to the rent every year, you get more profit faster. No one wants to invest in a 10% increase over 5 years if they can invest in 12% over 4 years. So no one ever invests in low-rent or Section 8 housing.
If you want to have good AI, you need to spend money and send your AI to college. Have real humans interact with it, correct its logic, and make sure it understands sarcasm and logical fallacies.
Or, you can go the cheap route: train it on 10 years of Reddit sh*tposts and hope for the best.
Tim Cook… go take your meds and watch The Price Is Right.
They could make Siri change its voice and Genmoji based on the degree of certainty of the response:
- Trust me: Arnold as Terminator 😎
- Eehhhh, could be bullshit: shrugging old man meme 🤷🏻‍♂️
- Just kiddin’ here: whacky Jerry Lewis 🤪
They could sell different voice packages. Revive the ringtone market.
That’s like saying you can’t be 100% sure you never have fake news at the top of search query results. It’s just a fact.