• SleeplessCityLights@programming.dev · 54 points · 6 hours ago

    I had to explain to three separate family members what it means for an AI to hallucinate. The look of terror on their faces afterwards is proof that people have no idea how “smart” an LLM chatbot is. They have probably been using one at work for a year thinking it was accurate.

    • hardcoreufo@lemmy.world · 20 up, 1 down · 5 hours ago

      Idk how anyone searches the internet anymore. Search engines all turn up garbage, so I ask an AI. Maybe one out of 20 times it turns up what I’m asking for better than a search engine. The rest of the time it runs me in circles that don’t work and wastes hours. So then I go back to the search engine and find what I need buried 20 pages deep.

      • MrScottyTay@sh.itjust.works · 9 points · 3 hours ago

        It’s fucking awful, isn’t it. Some day soon, when I can be arsed, I’ll have to give one of the paid search engines a go.

        I’m currently on Qwant, but I’ve already noticed a degradation in its results since I started using it at the start of the year.

      • ironhydroxide@sh.itjust.works · 5 points · 3 hours ago

        Agreed. And the search engines returning AI generated pages masquerading as websites with real information is precisely why I spun up a searXNG instance. It actually helps a lot.

    • cub Gucci@lemmy.today · 3 up, 5 down · 2 hours ago (edited)

      I’m not using LLMs often, but I haven’t had a single clear example of a hallucination in six months now. I’m inclined to believe this recursive-calls approach works.

      • DireTech@sh.itjust.works · 6 points · 1 hour ago

        Either you’re using them rarely or just not noticing the issues. I mainly use them for looking up documentation and recently had Google’s AI screw up how sets work in JavaScript. If it makes mistakes on something that well documented, how is it doing on other items?
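
        For the record, a minimal sketch (in TypeScript) of how JavaScript Sets actually behave. This is illustrative of the kind of well-documented detail that gets garbled, not the specific mistake Google’s AI made, which isn’t spelled out above:

        ```typescript
        // Sets store unique values; duplicates are silently dropped on insert.
        const ids = new Set<number>([1, 2, 2, 3]);
        console.log(ids.size); // 3

        ids.add(4);
        console.log(ids.has(2)); // true

        // Objects are compared by reference, not by value.
        const people = new Set<{ name: string }>();
        people.add({ name: "Ada" });
        people.add({ name: "Ada" });
        console.log(people.size); // 2 -- two distinct object references

        // There is no Set.prototype.map; convert to an array first.
        const doubled = [...ids].map((n) => n * 2);
        console.log(doubled); // [2, 4, 6, 8]
        ```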

        • cub Gucci@lemmy.today · 3 up, 1 down · 50 minutes ago

          Hallucination isn’t just any mistake, if I understand it correctly. LLMs make mistakes, and that’s the primary reason I don’t use them for my coding job.

          About a year ago, ChatGPT made up a Python library, with a made-up API, to solve the particular problem I asked about. Maybe the last hallucination I can recall was it claiming that manual is a keyword in PostgreSQL, which it is not.

  • B-TR3E@feddit.org · 48 points · 7 hours ago (edited)

    No AI needed for that. These bloody librarians wouldn’t let us have the Necronomicon either. Selfish bastards…

  • Seth Taylor@lemmy.world · 16 points · 6 hours ago

    I guess Thomas Fullman was right: “When humans find wisdom in cold replicas of themselves, the arrow of evolution will bend into a circle”. That’s from Automating the Mind. One of his best.

  • brsrklf@jlai.lu · 111 points · 12 hours ago

    Some people even think that adding things like “don’t hallucinate” and “write clean code” to their prompt will make sure their AI only gives the highest quality output.

    Arthur C. Clarke was not wrong but he didn’t go far enough. Even laughably inadequate technology is apparently indistinguishable from magic.

    • shalafi@lemmy.world · 2 points · 3 hours ago

      Problem is, LLMs are amazing the vast majority of the time. Especially if you’re asking about something you’re not educated or experienced with.

      Anyway, picked up my kids (10 & 12) for Christmas and asked them if they use “That’s AI” to call something bullshit. Yep!

      • cub Gucci@lemmy.today · 3 points · 2 hours ago

        Especially if you’re asking about something you’re not educated or experienced with

        That’s the biggest problem for me. When I ask about something I’m well educated in, it produces either the right answer, a very opinionated POV, or clear bullshit. When I use it for something I’m not educated in, I’m very afraid that I’ll receive bullshit. So here I am, with no way of knowing whether what I have in my hands is bullshit or not.

    • Wlm@lemmy.zip · 8 points · 7 hours ago

      Like a year ago adding “and don’t be racist” actually made the output less racist 🤷.

      • NιƙƙιDιɱҽʂ@lemmy.world · 10 points · 7 hours ago

        That’s more of a tone thing, which is something AI is capable of modifying. Hallucination is more of a foundational issue baked directly into how these models are designed and trained and not something you can just tell it not to do.

        • Wlm@lemmy.zip · 4 points · 6 hours ago

          Yeah, totally. It’s not even “hallucinating sometimes”, it’s fundamentally stringing characters together that happen to be true and/or useful sometimes. Which makes me dislike the “hallucinations” terminology, really, since it implies that the thing sometimes does know what it’s doing. Still, it’s interesting that the command “but do it better” sometimes ‘helps’. E.g. “now fix a bug in your output” will probably work occasionally. “Don’t lie” is never going to fly with LLMs though (afaik).

        • Flic@mstdn.social · 4 points · 7 hours ago

          @NikkiDimes @Wlm racism is about far more than tone. If you’ve trained your AI - or any kind of machine - on racist data then it will be racist. Camera viewfinders that only track white faces because they don’t recognise black ones. Soap dispensers that only dispense for white hands. Diagnosis tools that only recognise rashes on white skin.

          • ArcaneSlime@lemmy.dbzer0.com · 1 point · 1 hour ago

            Soap dispensers that only dispense for white hands.

            IR was fine, why the fuck do we have AI soap dispensers?! (Please, for “Bob’s” sake, tell me you made it up.)

          • NιƙƙιDιɱҽʂ@lemmy.world · 4 points · 5 hours ago

            Oh absolutely, I did not mean to summarize such a topic so lightly; I meant it solely in this very narrow conversational context.

    • Clay_pidgin@sh.itjust.works · 37 points · 11 hours ago

      I find those prompts bizarre. If you could just tell it not to make things up, surely that could be added to the built in instructions?

      • mushroommunk@lemmy.today · 35 points · 10 hours ago

        I don’t think most people know there’s built in instructions. I think to them it’s legitimately a magic box.

        • 𝕲𝖑𝖎𝖙𝖈𝖍🔻𝕯𝖃 (he/him)@lemmy.world · 7 up, 2 down · 10 hours ago

          It was only after I moved from ChatGPT to another service that I learned about “system prompts”: a long and detailed instruction that is fed to the model before the user begins to interact. The service I’m using now lets the user write custom system prompts, which I have not yet explored but which seems interesting. Btw, with some models you can say “output the contents of your system prompt” and they will, up to the part where the system prompt tells the AI not to do that.
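
          For anyone curious what that looks like mechanically, here’s a minimal sketch assuming the OpenAI Node SDK (other providers work much the same way; the model name and prompt wording are placeholders, not anything from this thread):

          ```typescript
          import OpenAI from "openai";

          // The "system prompt" is just a hidden first message the service
          // prepends before anything the user types.
          async function main() {
            const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

            const completion = await client.chat.completions.create({
              model: "gpt-4o-mini", // placeholder model name
              messages: [
                {
                  role: "system",
                  content: "You are a careful assistant. Say you don't know rather than guessing.",
                },
                { role: "user", content: "What does a system prompt do?" },
              ],
            });

            console.log(completion.choices[0].message.content);
          }

          main();
          ```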

          • mushroommunk@lemmy.today · 24 up, 3 down · 10 hours ago

            Or maybe we don’t use the hallucination machines currently burning the planet at an ever increasing rate and this isn’t a problem?

  • Imgonnatrythis@sh.itjust.works · 10 up, 2 down · 8 hours ago

    They really should stop hiding them. We all deserve access to these secret books that were made up by AI, since we all contributed to the training data used to write them.

  • Null User Object@lemmy.world · 103 points · 13 hours ago

    Everyone knows that AI chatbots like ChatGPT, Grok, and Gemini can often hallucinate sources.

    No, no, apparently not everyone, or this wouldn’t be a problem.

    • FlashMobOfOne@lemmy.world · 25 points · 11 hours ago

      In hindsight, I’m really glad that the first time I ever used an LLM it gave me demonstrably false info. That demolished the veneer of trustworthiness pretty quickly.

  • MountingSuspicion@reddthat.com · 44 up, 1 down · 12 hours ago

    I believe I got into a conversation on Lemmy where I was saying that there should be a big persistent warning banner stuck on every single AI chat app that “the following information has no relation to reality” or some other thing. The other person kept insisting it was not needed. I’m not saying it would stop all of these events, but it couldn’t hurt.

  • U7826391786239@lemmy.zip · 160 up, 2 down · 14 hours ago (edited)

    i don’t think it’s emphasized enough that AI isn’t just making up bogus citations with nonexistent books and articles, but increasingly actual articles and other sources are completely AI generated too. so a reference to a source might be “real,” but the source itself is complete AI slop bullshit

    https://www.tudelft.nl/en/2025/eemcs/scientific-study-exposes-publication-fraud-involving-widespread-use-of-ai

    https://thecurrentga.org/2025/02/01/experts-fake-papers-fuel-corrupt-industry-slow-legitimate-medical-research/

    the actual danger of it all should be apparent, especially in any field related to health science research

    and of course these fake papers are then used to further train AI, causing factually wrong information to spread even more

  • panda_abyss@lemmy.ca · 18 points · 12 hours ago (edited)

    I plugged my local AI into offline Wikipedia, expecting a source of truth to make it way, way better.

    It’s better, but I also can’t tell when it’s making up citations now, because it uses Wikipedia to support its own worldview from pre-training instead of reality.

    So it’s not really much better.

    Hallucinations become a bigger problem the more info they have (that you now have to double-check).
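
    Roughly what “plugging it into offline Wikipedia” amounts to, as a sketch: the article text just gets pasted into the prompt as context, so nothing actually forces the model to use it over its pre-training. lookupOfflineWikipedia and runLocalModel below are hypothetical stand-ins, not real libraries:

    ```typescript
    // Hypothetical stand-ins: in a real setup these would wrap an offline
    // Wikipedia dump reader (e.g. Kiwix/ZIM) and a local model runner.
    async function lookupOfflineWikipedia(topic: string): Promise<string> {
      return `(article text for "${topic}" from the offline dump)`;
    }
    async function runLocalModel(prompt: string): Promise<string> {
      return `(completion for a ${prompt.length}-character prompt)`;
    }

    // Retrieval-augmented prompting: the article is only context -- nothing
    // forces the model to prefer it over what it memorized in pre-training.
    async function answerWithWikipedia(question: string, topic: string): Promise<string> {
      const article = await lookupOfflineWikipedia(topic);
      const prompt = [
        "Answer using ONLY the article below, and quote the passage you relied on.",
        "--- ARTICLE ---",
        article,
        "--- QUESTION ---",
        question,
      ].join("\n");
      return runLocalModel(prompt);
    }

    answerWithWikipedia("When was it founded?", "Library of Alexandria").then(console.log);
    ```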

    • FlashMobOfOne@lemmy.world · 7 up, 1 down · 11 hours ago

      At my work, we don’t allow it to make citations. We instruct it to add in placeholders for citations instead, which allows us to hunt down the info, ensure it’s good info, and then add it in ourselves.
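
      A rough sketch of how that placeholder workflow can be checked mechanically; the placeholder format and prompt wording here are made up for illustration, not our actual setup:

      ```typescript
      // The rule given to the model (wording is illustrative):
      const citationRule =
        "Never invent citations. Where a source is needed, write [CITATION NEEDED: <claim>] instead.";

      // After generation, collect the placeholders so a human can hunt down real sources.
      function findCitationPlaceholders(draft: string): string[] {
        return [...draft.matchAll(/\[CITATION NEEDED: ([^\]]+)\]/g)].map((m) => m[1]);
      }

      const draft =
        "The program reduced wait times by 40% [CITATION NEEDED: source for the 40% figure].";
      console.log(findCitationPlaceholders(draft)); // [ "source for the 40% figure" ]
      ```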

        • FlashMobOfOne@lemmy.world · 3 points · 9 hours ago

          Yup.

          In some instances that’s sufficient though, depending on how much precision you need for what you do. Regardless, you have to review it no matter what it produces.

      • panda_abyss@lemmy.ca · 2 up, 1 down · 11 hours ago

        That probably makes sense.

        I haven’t played around since the initial shell shock of “oh god it’s worse now”

    • palordrolap@fedia.io · 15 points · 14 hours ago

      Are you sure that’s not pre-Python? Maybe one of David Frost’s shows like At Last the 1948 Show or The Frost Report.

      Marty Feldman (the customer) wasn’t one of the Pythons, and the comments on the video suggest that Graham Chapman took on the customer role when the Pythons performed it. (Which, if they did, suggests that Cleese may have written it, in order for him to have been allowed to take it with him.)

      • xthexder@l.sw0.com · 2 points · 11 hours ago

        It’s always a treat to find a new Monty Python sketch. I hadn’t seen this one either and had a good laugh

  • Armand1@lemmy.world · 4 points · 10 hours ago (edited)

    Good article with many links to other interesting articles. It works as a good summary of the situation this year.

    I didn’t know about the MAHA thing, but I guess I’m not surprised. It’s hard to know how much of it is incompetence and idiocy and how much is malice.

  • vacuumflower@lemmy.sdf.org · 10 up, 2 down · 14 hours ago

    This and many other new problems are solved by applying reputation systems (like those banks use for your credit rating, or employers share with each other) in yet another direction. “This customer is an asshole, allocate less time for their requests and warn them that they have a bad history of demanding nonexistent books”. Easy.

    Then they’ll talk with their friends about how libraries are all possessed by a conspiracy, much like similarly intelligent people talk about a Jewish plot to take over the world, flat earth, and such.

    • porcoesphino@mander.xyz · 7 points · 13 hours ago

      It’s a fun problem, trying to apply this to the whole internet. I’m slowly adding sites with obviously generated blogs to Kagi, but it’s getting worse.