Screenshot of this question was making the rounds last week. But this article covers testing against all the well-known models out there.
Also includes outtakes on the ‘reasoning’ models.
“LLMs don’t have human understanding or metacognition”
Then what’s the (auto-completing) fucking problem? On that view it’s just a series of mechanical steps over data. You could feed it white noise and it would vomit up more noise, and keep doing it as long as there’s power.
Intelligent?
If it were just autocomplete in the dismissive sense, white noise in should mean white noise out. Instead it tries to make sense of the input. Why? Because it learned strong language priors from us, and it leans on those when the prompt is meaningless.
“Not human understanding” ≠ “no reasoning-like computation.”
Those aren’t the same thing.
People doing the “fancy autocomplete” thing are making the laziest possible move: not human, therefore nothing interesting is happening. I disagree with that.
It doesn’t “understand” the way we do, and it’s not infallible, but calling it “fancy autocomplete” is like calling a jet engine a “fancy candle.”
Same category of thing, wildly different behavior.