• MyMindIsLikeAnOcean@piefed.world
    link
    fedilink
    English
    arrow-up
    44
    ·
    12 hours ago

    No shit.

    I actually believed somebody when they told me it was great at writing code, and asked it to write me the code for a very simple Lua mod. It made several errors and ended up wasting my time, because I had to rewrite it.

    • morto@piefed.social
      link
      fedilink
      English
      arrow-up
      14
      ·
      5 hours ago

      In a postgraduate class, everyone was praising AI, calling it nicknames and even their friend (yes, friend). One day, the professor and a colleague were discussing some code when I approached, and they started their routine bullying of me for being dumb and not using AI. Then I looked at his code and asked to test his core algorithm, which he had converted from some Fortran code and “enhanced”. I ran it with some test data, compared it to the original code, and the result was different! They blindly trusted AI code that deviated from their theoretical methodology, and they’re publishing papers with those results!

      Even after I showed them the different result, they weren’t convinced of anything and still bully me for not using AI. Seriously, this has become some sort of cult at this point. People are becoming irrational. If people at other universities are behaving the same way and publishing like this, I’m seriously concerned for the future of science and humanity itself. Maybe we should archive everything published up to 2022, to leave as a base for the survivors of our downfall.

      • MyMindIsLikeAnOcean@piefed.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        2 hours ago

        The way it was described to me by some academics is that it’s useful…but only as a “research assistant” to bounce ideas off of and bring in arcane or tertiary concepts you might not have considered (after you vet them thoroughly, of course).

        The danger, as described by the same academics, is that it can act as a “buddy” who confirms your biases. It can generate truly plausible bullshit to support deeply flawed hypotheses, for example. Their main concern is it “learning” to stroke the egos of the people using it, creating a feedback loop and its own bubbles of bullshit.

      • Serinus@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        3 hours ago

        It works well when you use it for small (or repetitive), explicit tasks that you can easily check.
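        For example (a sketch of that workflow; the helper below is a made-up stand-in for model output, not anything a model actually wrote): ask for something tiny and explicit, like a slug function, and checking it takes seconds:

```python
import re

def slugify(text: str) -> str:
    """Stand-in for a small, explicit task you might hand to a model:
    lowercase the text and collapse runs of non-alphanumerics to hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# The whole point of "small and explicit": verification takes seconds.
assert slugify("Hello, World!") == "hello-world"
assert slugify("under_scores and spaces") == "under-scores-and-spaces"
```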

        • ThirdConsul@lemmy.ml
          link
          fedilink
          English
          arrow-up
          1
          ·
          2 hours ago

          According to OpenAI’s internal test suite and system card, the hallucination rate is about 50%, and the newer the model, the worse it gets.

          And that holds for other LLMs as well.

        • frongt@lemmy.zip
          link
          fedilink
          English
          arrow-up
          8
          ·
          9 hours ago

          For words, it’s pretty good. For code, it often invents a reasonable-sounding function or model name that doesn’t exist.
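          A concrete example of that (illustrative, not from any specific model run): conflating JavaScript’s JSON.parse with Python, whose json module has no such function:

```python
import json

# A plausible-looking call a model might produce by analogy with
# JavaScript's JSON.parse -- Python's json module has no "parse".
assert not hasattr(json, "parse")

# The real API is json.loads (parse a string) / json.load (read a file).
data = json.loads('{"ok": true}')
assert data["ok"] is True
```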

      • ptu@sopuli.xyz
        link
        fedilink
        English
        arrow-up
        6
        arrow-down
        1
        ·
        11 hours ago

        I use it for things that are simple and monotonous to write. This way I’m able to deliver results on tasks I couldn’t have been arsed to do otherwise. I’m a data analyst and mostly use MySQL and Power Query.

      • dogdeanafternoon@lemmy.ca
        link
        fedilink
        English
        arrow-up
        1
        ·
        10 hours ago

        What’s your preferred Hello World language? I’m gonna test this out. The more complex the code you need, the more they suck, but I’ll be amazed if it doesn’t work first try to simply print Hello World.

        • xthexder@l.sw0.com
          link
          fedilink
          English
          arrow-up
          8
          ·
          edit-2
          9 hours ago

          Malbolge is a fun one

          Edit: Funnily enough, ChatGPT fails to get this right, even with the answer right there on Wikipedia. When I tried running ChatGPT’s output, the first few characters were correct, but it errored out with “invalid char at 37”.

          • dogdeanafternoon@lemmy.ca
            link
            fedilink
            English
            arrow-up
            2
            ·
            9 hours ago

            Cheeky, I love it.

            Got correct code first try. Failed creating a working Docker setup first try. Second try worked.

            # Write the Malbolge source to a temp file, run it in the
            # esolang/malbolge Docker image, then clean up.
            tmp="$(mktemp)"; cat >"$tmp" <<'MBEOF'
            ('&%:9]!~}|z2Vxwv-,POqponl$Hjig%eB@@>}=<M:9wv6WsU2T|nm-,jcL(I&%$#"
            `CB]V?Tx<uVtT`Rpo3NlF.Jh++FdbCBA@?]!~|4XzyTT43Qsqq(Lnmkj"Fhg${z@>
            MBEOF
            docker run --rm -v "$tmp":/code/hello.mb:ro esolang/malbolge malbolge /code/hello.mb; rm "$tmp"

            Output: Hello World!

            • xthexder@l.sw0.com
              link
              fedilink
              English
              arrow-up
              5
              ·
              edit-2
              8 hours ago

              I’m actually slightly impressed it got both a working program and a different one than Wikipedia’s. The Wikipedia one prints “Hello, world.”

              I guess there must be another program floating around the web that prints “Hello World!”, since there’s no chance the LLM figured it out on its own (writing Malbolge pretty much requires specialized search algorithms).

              • dogdeanafternoon@lemmy.ca
                link
                fedilink
                English
                arrow-up
                1
                ·
                8 hours ago

                I’d never even heard of that language, so it was fun to play with.

                Definitely agree that the LLM didn’t actually figure anything out, but at least it’s not completely useless.