• skeptomatic@lemmy.ca · 2 days ago

    Yeah, I understand how AI works; you don’t need to tell me about it. Humans are mimics too. Your “probably not” argument gets thinner with every major AI update. Check the scoreboard and the exponential curve these things are on.
    You think they offered the full meal deal to the public? What’s happening in the back room?
    My point is that it’s a tool. All the anti-AI people seem to be stuck on this bullshit about whether it’s going to be superintelligent, smarter than humans or not.
    It doesn’t have to be for this purpose. Will it be in the future? Doesn’t matter. It’s a tool that can be leveraged right now.
    Maybe that’s the Great Filter after all: civilizations in the universe eventually end up making AI, it wipes everybody out, and then it goes dormant. Who knows? But it’s here, and it can do some crazy shit already.

    • Feathercrown@lemmy.world · edited 2 days ago

      > Your “probably not” argument gets thinner every major AI update.

      Right, but I’m talking about whether they’re already using it, not whether they will in the future. It’s certainly interesting to speculate about, though. I don’t think we really know for sure how good it will get, or how fast.

      Something interesting that’s come up is scaling laws. Compute, dataset size, and parameter count appear to set a floor on how low the error rate can go, regardless of the model’s architecture. And dataset size and model size appear to need to be scaled up in tandem to avoid over- or under-fitting. It’s possible, although not guaranteed, that we’re discovering fundamental laws about pattern recognition. Or maybe it’s just a limitation of our current approach.
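      For a concrete sense of what those laws look like, here’s a rough sketch using the Chinchilla-style form L(N, D) = E + A/N^α + B/D^β, where N is parameter count and D is training tokens. The constants are the fits reported by Hoffmann et al. (2022); treat them as ballpark illustrations, not exact predictions.

      ```python
      # Illustrative only: Chinchilla-style scaling law L(N, D) = E + A/N**alpha + B/D**beta.
      # Constants are the commonly cited fits from Hoffmann et al. 2022.

      def predicted_loss(N: float, D: float,
                         E: float = 1.69, A: float = 406.4, B: float = 410.7,
                         alpha: float = 0.34, beta: float = 0.28) -> float:
          """Predicted training loss for N parameters trained on D tokens."""
          return E + A / N**alpha + B / D**beta

      print(predicted_loss(70e9, 1.4e12))   # roughly Chinchilla-scale: ~1.94
      print(predicted_loss(140e9, 1.4e12))  # double the parameters, same data: ~1.92
      print(predicted_loss(140e9, 2.8e12))  # scale parameters and data together: ~1.89
      ```

      The takeaway is the same as above: past a point, adding parameters without adding data (or vice versa) buys very little, which is why the two appear to need scaling in tandem.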