• kescusay@lemmy.world
    16 hours ago

    Like I said, I do find it useful at times. But not only shouldn’t it replace coders, it fundamentally can’t. At least, not without completely rearchitecting how these models work.

    The reason it goes down a “really bad path” is that it’s basically glorified autocomplete. It doesn’t know anything.
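
    To illustrate what I mean, here’s a toy sketch of what autoregressive generation boils down to: repeatedly picking a plausible next token. The probability table is made up for the example, not from any real model.

        import random

        # Toy next-token table: maps the last two tokens to a distribution
        # over the next one. A real LLM learns this from billions of
        # parameters instead of a hand-written lookup table.
        NEXT_TOKEN_PROBS = {
            ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
            ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
            ("sat", "on"): {"the": 0.95, "a": 0.05},
            ("on", "the"): {"mat": 0.7, "sofa": 0.3},
        }

        def generate(prompt, steps):
            tokens = list(prompt)
            for _ in range(steps):
                context = tuple(tokens[-2:])  # real models condition on far more context
                dist = NEXT_TOKEN_PROBS.get(context)
                if dist is None:
                    break                     # nothing plausible to say next
                words, weights = zip(*dist.items())
                tokens.append(random.choices(words, weights=weights)[0])
            return " ".join(tokens)

        print(generate(["the", "cat"], steps=4))
        # -> e.g. "the cat sat on the mat": fluent, but there is no model of
        #    cats or mats anywhere, only statistics about the next token.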

    On top of that, spoken and written language are very imprecise, and there’s no way for an LLM to derive what you really wanted from context clues such as your tone of voice.

    Take the phrase “fruit flies like a banana.” Am I saying that a piece of fruit might fly the way a banana does when thrown? Or am I saying that the insect called the fruit fly might like to eat a banana?

    It’s a humorous line, but my point is serious: we unintentionally speak in ambiguous ways like that all the time. And while our brains can interpret unspoken signals to parse the intended meaning of a word or phrase, LLMs can’t.
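
    You can even make a parser exhibit the ambiguity. A quick sketch, assuming nltk is installed; the grammar is a toy I wrote just for this one sentence, not part of any real NLP pipeline:

        import nltk

        # A minimal grammar covering only this sentence, to show that it
        # genuinely has two valid parses.
        grammar = nltk.CFG.fromstring("""
        S -> NP VP
        NP -> N | N N | Det N
        VP -> V PP | V NP
        PP -> P NP
        Det -> 'a'
        N -> 'fruit' | 'flies' | 'banana'
        V -> 'flies' | 'like'
        P -> 'like'
        """)

        parser = nltk.ChartParser(grammar)
        for tree in parser.parse("fruit flies like a banana".split()):
            tree.pretty_print()
        # Prints two trees: one where "fruit" is the subject and "flies"
        # the verb (the thrown-fruit reading), and one where "fruit flies"
        # is the subject and "like" the verb (the hungry-insect reading).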

    • FreedomAdvocate@lemmy.net.au
      6 hours ago

      > The reason it goes down a “really bad path” is that it’s basically glorified autocomplete. It doesn’t know anything.

      Not quite true - GitHub Copilot in VS, for example, can be given access to your entire repo/project, and it then “knows” how things tie together, so it can draw on much more context for its suggestions and generated code.