I remember seeing a comment on here that said something along the lines of “for every dangerous or wrong response that goes public there’s probably 5, 10 or even 100 of those responses that only one person saw and may have treated as fact”
The fact that we don’t even know the ratio is the really infuriating thing.
Tech company creates best search engine → world domination → becomes VC company in tech trench coat → destroy search engine to prop up bad investments in ~~artificial intelligence~~ advanced chatbots
Then hire cheap human intelligence to correct the AI's hallucinatory trash, which was trained on actual human-generated content whose original intended audience did understand the nuanced context and meaning in the first place. Wow, it's like they've shovelled a bucket of horse manure onto the pizza as well as the glue. Added value for the advertisers. AI my arse. I think calling these things language models is being generous. More like energy- and data-hungry vomitrons.
Calling these things Artificial Intelligence should be a crime. It’s false advertising! Intelligence requires critical thought. They possess zero critical thought. They’re stochastic parrots, whose only skill is mimicking human language, and they can only mimic convincingly when fed billions of examples.
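To make the "stochastic parrot" point concrete, here's a toy sketch (mine, not anything resembling a real LLM): a model that learns only which word tends to follow which, then babbles by sampling. Scale the same idea up by a few billion parameters and you get fluent mimicry, still with no mechanism for checking whether any of it is true.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": learn which word tends to follow which,
# then generate by sampling. Pure statistics; no facts, no reasoning.
corpus = "the glue keeps the cheese on the pizza and the glue is non toxic".split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

word = "the"
output = [word]
for _ in range(8):
    # Mimic whatever the training data did; never verify it.
    word = random.choice(follows.get(word, corpus))
    output.append(word)

print(" ".join(output))
```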
It’s more of a Reddit Collective Intelligence (CI) than an AI.
Collective stupidity more like
It’s like they made a bot out of the subreddit confidently incorrect.
You either die a hero or live long enough to become the villain
Stealing “advanced chatbots”, that’s a great way to describe it.
“Many of the examples we’ve seen have been uncommon queries,”
Ah the good old “the problem is with the user not with our code” argument. The sign of a truly successful software maker.
“We don’t understand. Why aren’t people simply searching for Taylor Swift”
I tried, but it always comes up with pictures of airplanes for some reason.
I mean… I guess you could paraphrase it that way. I took it more as “Look, you probably aren’t going to run into any weird answers,” which seems like a valid thing for them to try to convey.
(That being said, fuck AI, fuck Google, fuck reddit.)
“I’m feeling depressed” is not an uncommon query under capitalism run amok. “One Reddit user recommends jumping off the Golden Gate Bridge” is not just a weird answer, it is a wholly irresponsible one.
So, no, their response is not valid. It is entirely user-blaming in order to avoid culpability.
There are currently a lot of fake screenshots since it quickly became a meme, pretty sure this is one.
Still a fuck up in general on their part.
Fair enough. I know how easy it is to fake a Google search with inspect element. I’ve been trying to verify for myself how shitty it is, but AI Overviews don’t seem to be showing up for me (I’ve done all the correct steps to enable them, but no search actually produces one).
The fact that it’s hard to tell is pretty damning, for the public perception of SGE if not for its actual capabilities.
Correcting over a decade of Reddit shitposting in what, a few weeks? They’re pretty ambitious.
This is perhaps the most ironic thing about the whole reddit data-scraping thing and spez selling out reddit’s user data to LLMs. Like. We spent so much time posting nonsense. And then a bunch of people became mods to course-correct subreddits where that nonsense could be potentially fatal. And then they got rid of those mods because they protested. And now it’s bots on bots on bots posting nonsense. And they want their LLMs trained on that nonsense because reasons.
Isn’t the model fundamentally flawed if it can’t appropriately present arbitrary results? It is operating at a scale where human workers cannot catch every concerning result before users see it.
The ethical thing to do would be to discontinue this failed experiment. The way it presents results is demonstrably unsafe. It will continue to present satire and shitposts as suggested actions.
If you have to constantly manually intervene in what your automated solution is doing, then it’s probably not doing a very good job, and it might be a good idea to go back to the drawing board.
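For a feel of why per-query manual patching can't keep up, here's a toy illustration (purely hypothetical, obviously not Google's actual mitigation pipeline): the space of queries is open-ended, so a hand-maintained denylist only ever covers failures someone has already spotted.

```python
# Hand-patched denylist: it only covers the exact bad queries someone
# has already seen go viral. Trivial rephrasings sail right past it.
BLOCKED_QUERIES = {"how much glue to add to pizza sauce"}

def overview_allowed(query: str) -> bool:
    return query.lower().strip() not in BLOCKED_QUERIES

print(overview_allowed("How much glue to add to pizza sauce"))  # False: patched
print(overview_allowed("what amount of glue goes on pizza?"))   # True: same intent, unpatched
```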
You mean like with our economic system?
[deleted by creator]
That can’t answer most questions though. For example, I hung a door recently and had some questions that it answered (mostly) accurately. An encyclopedia can’t tell me how to hang a door.
Yeah, there’s a reason this wasn’t done before generative AI. It couldn’t handle anything slightly more specific.
Same. I was dealing with a strange piece of software; I searched configs and samples for hours and couldn’t find anything about anybody having any problems with the weird language it uses. I finally gave up and asked GPT, and it explained exactly what was going wrong and gave me half a dozen answers to try to fix it.
“That can’t answer most questions though.”
It would make AI much more trustworthy. You cannot trust ChatGPT on anything related to science, because it tells you stuff like the Andromeda galaxy being inside the Milky Way. The only way to fix that is to directly program basic known science into the AI.
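In the spirit of that suggestion, a minimal sketch of what "programming known science in" could look like: vet the model's claim against a small curated fact table before repeating it. Everything here (the fact table, the crude substring check) is hypothetical and wildly simplified; doing this at real scale is the unsolved part.

```python
# Hypothetical curated fact table: (subject, relation) -> ground truth.
KNOWN_FACTS = {
    ("Andromeda galaxy", "located in"): "the Local Group, outside the Milky Way",
}

def vet_claim(subject: str, relation: str, model_answer: str) -> str:
    """Override the model's answer when it contradicts a curated fact."""
    truth = KNOWN_FACTS.get((subject, relation))
    if truth is not None and truth.lower() not in model_answer.lower():
        return f"Correction: the {subject} is {relation} {truth}."
    return model_answer

print(vet_claim("Andromeda galaxy", "located in",
                "The Andromeda galaxy is inside the Milky Way."))
```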
Google wants that to work. That’s why the “knowledge panels” kept popping up at the top of search before now with links to Wikipedia. They only want to answer the easy questions; definitions, math problems, things that they can give you the Wikipedia answer for, Yelp reviews, “Thai Food Near Me,” etc. They don’t want to answer the hard questions; presumably because it’s harder to sell ads for more niche questions and topics. And “harder” means you have to get humans involved. Which is why they’re complaining now that users are asking questions that are “too hard for our poor widdle generative AI to handle :-(”— they don’t want us to ask hard questions.
Here’s an idea, Google: why not set it back to how it was 10-15 years ago?
The problem is, the internet has adapted to the Google of a year ago, which means that setting Google search back to 2009 just means that every “SEO hacker” gets to have a field day to get spam to the top of results without any controls to prevent them.
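For a sense of how easy the old game was, here's a toy ranker in the spirit of early keyword-frequency scoring (my own illustration, not any real engine's algorithm): keyword stuffing beats honest content every time, which is exactly why all those controls exist.

```python
# Toy circa-2009 ranking: score a page by raw query-term frequency.
def score(page_text: str, query: str) -> int:
    return page_text.lower().split().count(query.lower())

honest_page = "a careful step by step guide to hanging a door properly"
spam_page = "door " * 50 + "buy cheap pills now"

print(score(honest_page, "door"))  # 1
print(score(spam_page, "door"))    # 50: stuffed spam outranks the honest page
```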
Google built a search engine optimized for the early internet. Bad actors adapted, to siphon money out of Google traffic. Google adapted to stop them. Bad actors adapted. So began a cat-and-mouse game which ended with the pre-AI Google search we all know and hate today. Through their success, Google has destroyed the internet that was; and all that’s left is whatever this is. No matter what happens next, Google search is toast.
It’s even broader than that: historically, most of the original protocols for the Internet were designed assuming people wouldn’t do bad things. For example, the original e-mail protocol (SMTP) allowed anybody to connect to an e-mail server using Telnet (a plain-text, unencrypted remote terminal) and type a handful of pretty simple commands to send an e-mail as if they were any e-mail account on that domain (which was a great way for techies to prank their mates back when I was at uni in the early 90s). Even now that a lot of it has been tightened, we’re still suffering from problems like spam and phishing because of the “good faith” approach used to design what became one of the most used text communication protocols around.
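To show just how good-faith that original design was, here's roughly what the prank looked like, sketched in Python rather than typed into Telnet by hand. Hosts and addresses are placeholders, and any modern server will refuse this thanks to auth, SPF, DKIM, etc.; it's the historical behaviour being illustrated.

```python
import socket

def send_forged_mail(host: str) -> None:
    """Sketch of the classic open SMTP exchange: the server simply
    believes whatever MAIL FROM you type. Historical behaviour only."""
    with socket.create_connection((host, 25)) as s:
        def cmd(line: str) -> None:
            s.sendall((line + "\r\n").encode())
            print(s.recv(1024).decode().strip())  # server's reply

        print(s.recv(1024).decode().strip())        # greeting banner
        cmd("HELO prankster.example")
        cmd("MAIL FROM:<dean@university.example>")  # claim to be anyone
        cmd("RCPT TO:<mate@university.example>")
        cmd("DATA")                                 # server answers 354
        cmd("Subject: See me at once\r\n\r\nMy office. Now.\r\n.")
        cmd("QUIT")
```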
If only there were a way to show the whole world in one simple example how Enshittification works.
Google execs: Hold my beer!
[…] a lot of AI companies are “selling dreams” that this tech will go from 80 percent correct to 100 percent.
In fact, Marcus thinks that last 20 percent might be the hardest thing of all.
Yeah, it’s well known, e.g. people say “the last 20% takes 80% of the effort”. All the most tedious and difficult stuff gets postponed to the end, which is why so many side projects never get completed.
It’s not just the difficult stuff, but often the mundane, e.g. stability, user-friendliness, polish, scalability, etc., that takes something from working in a constrained environment to an actual product. It’s a chore to work on and a lot less “sexy”, with never enough resources allocated to it: we’ve done all the difficult stuff already, how much more work can this be?
Turns out, a fucking lot.
Absolutely, that’s what I was thinking of when I wrote “tedious”; all the stuff you mentioned matters a lot to the user (or product owner) but isn’t the interesting stuff for a programmer.
Allowing reddit to train Google’s AI was a mistake to begin with. I mean, just look at reddit and the shitlord that is spez.
There are better sources, and reddit is not one of them.
Isn’t that like trying to get pee out of a pool?
I’d be tickled to have odd answers by Mr. Yankovic myself.
Would you be tickled if Mr Yankovic strapped you down to some medical restraining table and then…tickled your feet with a feather???
Seems like something he’d do.
High chance of success
At this point, it seems like Google is just a platform for messaging a Google employee to go google it for you.
Is that employee named Jeeves?
…Always has been
Does anybody remember ChaCha? This was literally their model. A person asks a question via text message (this was like 2008), a college student Googles the answer, follows a link, copies and pastes the answer, and gets paid like 20¢.
Source: I was one of those college students. I never even got paid enough to get a payout before they went under.
I looove how the people at Google are so dumb that they forgot that anything resembling real intelligence in ChatGPT is just cheap labor in Africa (Kenya, if I remember correctly) picking good training data. So OpenAI, using an army of smart humans and lots of data, built a computer program that sometimes looks smart hahaha.
But the dumbasses at Google really drank the Kool-Aid hahaha. They really believed that LLMs are magically smart, so they fed theirs reddit garbage, unfiltered hahahaha. Just from a PR perspective it must be a nightmare for them. I really can’t understand what they were thinking here hahaha, it’s so pathetically dumb. Just goes to show that money can’t buy intelligence, I guess.