• cub Gucci@lemmy.today · 3 hours ago

    I don’t use LLMs often, but I haven’t seen a single clear example of hallucination in the last six months, so I’m inclined to believe this recursive-calls approach works.

    • DireTech@sh.itjust.works · 3 hours ago

      Either you’re using them rarely or just not noticing the issues. I mainly use them for looking up documentation and recently had Google’s AI screw up how sets work in JavaScript. If it makes mistakes on something that well-documented, how is it doing on other items?
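      The comment doesn’t say which detail Google’s AI got wrong, but for reference, here is a short sketch of the well-documented JavaScript Set behavior (per MDN and the ECMAScript spec) that models commonly misstate, written as TypeScript:

      ```typescript
      // A sketch of basic, well-documented JavaScript/TypeScript Set behavior.
      // (The comment above doesn't say what the AI got wrong; these are simply
      // points from MDN / the ECMAScript spec that are easy to misstate.)

      const s = new Set<number>([1, 2, 2, 3, NaN, NaN]);

      // Duplicates are removed using SameValueZero equality, so NaN counts once.
      console.log(s.size);        // 4 (1, 2, 3, NaN); note it is .size, not .length

      // Objects are compared by reference, not by structure.
      const objects = new Set<object>([{ id: 1 }, { id: 1 }]);
      console.log(objects.size);  // 2: two distinct references, no deduplication

      // Sets keep insertion order and have no index access; iterate instead.
      for (const value of s) {
        console.log(value);       // 1, 2, 3, NaN in insertion order
      }

      // Membership checks and removal are O(1) on average.
      console.log(s.has(2));      // true
      s.delete(2);
      console.log(s.has(2));      // false
      ```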

      • cub Gucci@lemmy.today · 2 hours ago

        Hallucination isn’t just any mistake, if I understand it correctly. LLMs make mistakes, and that is the primary reason I don’t use them for my coding job.

        About a year ago, ChatGPT made up a Python library with a made-up API to solve the particular problem I asked about. Maybe the last hallucination I can recall was its claim that manual is a keyword in PostgreSQL, which it is not.
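
        That PostgreSQL claim is easy to verify, since the server exposes its own keyword list through the built-in pg_get_keywords() function. A minimal TypeScript sketch, assuming the node-postgres (pg) package and a placeholder connection string; the isKeyword helper is just illustrative:

        ```typescript
        import { Client } from "pg";

        // Returns true if the connected PostgreSQL server treats the word as a keyword.
        // pg_get_keywords() lists every keyword the server's parser recognizes.
        async function isKeyword(word: string): Promise<boolean> {
          // Placeholder connection string; adjust for a real server.
          const client = new Client({ connectionString: "postgres://localhost/postgres" });
          await client.connect();
          try {
            const res = await client.query(
              "SELECT word FROM pg_get_keywords() WHERE word = $1",
              [word.toLowerCase()]
            );
            return res.rows.length > 0;
          } finally {
            await client.end();
          }
        }

        isKeyword("manual").then((found) => {
          console.log(found); // false: 'manual' is not a PostgreSQL keyword
        });
        ```

        Run against any recent PostgreSQL version, this prints false for manual, while genuine keywords such as select or table return true.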