• istanbullu@lemmy.ml
    link
    fedilink
    English
    arrow-up
    12
    arrow-down
    1
    ·
    1 year ago

    I don’t buy into this “AI is dangerous” hype. Humans are dangerous.

    • Thorny_Insight@lemm.ee
      link
      fedilink
      English
      arrow-up
      5
      ·
      edit-2
      1 year ago

      AI can be dangerous. The point is not that it’s likely but that in the very unlikely event of it going rogue it can at worst have civilication ending consequences.

      Imagine how easy it is to trick a child as an adult. The difference in intelligence between a human and superintelligent AGI would be orders of magnitude greater that that.

    • kromem@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      1 year ago

      Exactly. People try to scare into regulatory capture talking about paperclip maximizers when meanwhile it’s humans and our corporations that are literally making excess shit to the point of human extinction.

      To say nothing for how often theorizing around ‘superintelligence’ imagines the stupidest tendencies of humanity being passed on to it while denying our smartest tendencies as “uniquely human” despite existing models largely already rejecting the projected features and modeling the ‘unique’ ones like empathy.

    • xcjs@programming.dev
      link
      fedilink
      English
      arrow-up
      3
      ·
      edit-2
      1 year ago

      I was reflecting on this myself the other day. For all my criticisms of Zuckerberg/Meta (which are very valid), they really didn’t have to release anything concerning LLaMA. They’re practically the only reason we have viable open source weights/models and an engine.

  • kromem@lemmy.world
    link
    fedilink
    English
    arrow-up
    3
    ·
    edit-2
    1 year ago

    It’s not as good as it seems at the surface.

    It is a model squarely in the “fancy autocomplete” category along with GPT-3 and fails miserably at variations of logic puzzles in ways other contemporary models do not.

    It seems that the larger training data set allows for better modeling around the fancy autocomplete parts, but even other similarly sized models like Mistral appear to have developed better underlying critical thinking capacities when you scratch below the surface that are absent here.

    I don’t think it’s a coincidence that Meta’s lead AI researcher is one of the loudest voices criticizing the views around emergent capabilities. There seems to be a degree of self-fulfilling prophecy going on. A lot of useful learnings in the creation of Llama 3, but once other models (i.e. Mistral) also start using extended training my guess is that any apparent advantages to Llama 3 right now are going to go out the window.

  • BetaDoggo_@lemmy.world
    link
    fedilink
    English
    arrow-up
    1
    ·
    1 year ago

    The 8B is incredible for it’s size and they’ve managed to do sane refusal training this time for the official instruct.