I watched this entire video just so that I could have an informed opinion. First off, this feels like two very separate talks:
The first part is a decent breakdown of how artificial neural networks process information and store relational data about that information in a vast matrix of numerical weights that can later be used to perform some task. In the case of computer vision, those weights can be used to recognize objects in pictures or video streams, such as whether something is a hotdog or not.
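To make that concrete, here is a minimal sketch of the inference side of that idea, written in NumPy with made-up weights rather than anything from the talk: once a network is trained, "recognizing" an image is just multiplying the input by the stored weight matrices and reading off a score.

    import numpy as np

    # Toy "trained" network: the learned knowledge lives entirely in these
    # weight matrices; inference is just matrix multiplication plus a nonlinearity.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(64, 16))   # input pixels -> hidden features (stand-in for learned weights)
    W2 = rng.normal(size=(16, 2))    # hidden features -> scores for ["not hotdog", "hotdog"]

    def classify(image_pixels):
        hidden = np.maximum(0, image_pixels @ W1)      # ReLU feature layer
        scores = hidden @ W2                           # class scores
        probs = np.exp(scores) / np.exp(scores).sum()  # softmax
        return ["not hotdog", "hotdog"][int(probs.argmax())], probs

    label, probs = classify(rng.random(64))  # a fake 8x8 "image", flattened
    print(label, probs)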
As a side note, if you look up Hinton’s 2024 Nobel Prize in Physics, you’ll see that he won for his foundational work on these neural networks and, specifically, their training. He’s definitely an expert on the nuts and bolts of how neural networks work and how to train them.
He then goes into linguistics and how language can be encoded in these neural networks, which is how large language models (LLMs) work… by breaking down words and phrases into tokens and then using the weights in these neural networks to encode how those tokens relate to each other. These connections are later used to generate new text related to the text given as input. So far so good.
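A toy sketch of that generation loop, purely as illustration: the "weights" below are just bigram counts rather than a transformer's parameters, but the shape of the process (tokenize, score candidate next tokens, sample one, append, repeat) is the same thing being described.

    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # "Training": count which token follows which (a stand-in for learned weights).
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def generate(prompt_token, length=8):
        out = [prompt_token]
        for _ in range(length):
            followers = counts.get(out[-1])
            if not followers:
                break
            tokens, weights = zip(*followers.items())
            out.append(random.choices(tokens, weights=weights)[0])  # sample the next token
        return " ".join(out)

    print(generate("the"))  # output varies, e.g. "the cat sat on the rug"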
At that point he points out that these foundational building blocks have led to where we are now, at least in a very general sense. He then shows what I consider the pivotal slide of the entire talk, labeled Large Language Models, which you can see at 17:22. In particular, he has two questions at the bottom of the slide that are most relevant:
Are they genuinely intelligent?
Or are they just a form of glorified auto-complete that uses statistical regularities to pastiche together pieces of text that were created by other people?
The problem is: he never answers these questions. He immediately moves on to his own theory about how language works using an analogy to LEGO bricks, and then completely disregards the work of linguists in understanding language, because what do those idiots know?
At this point he brings up “The long term existential threat”, and I would argue the rest of this talk is now science fiction, because it presupposes that understanding the relationships between words is all that is necessary for AI to become superintelligent and therefore a threat to all of us.
Which goes back to the original problem in my opinion: LLMs are text generation machines. They use neural networks encoded as a matrix of weights that can be used to predict long strings of text based on other text. That’s it. You input some text, and it outputs other text based on that original text.
We know that different parts of the brain have different responsibilities. Some parts are used to generate language, other parts store memories, still other parts are used to make our bodies move or regulate autonomic processes like our heartbeat and blood pressure. Still other bits are used to process images from our eyes, others reason about spatial awareness, while others engage in emotional regulation and processing.
Saying that having a model for language means we’ve built an artificial brain is like saying that because I built a round shape called a wheel, I invented the modern automobile. It’s a small part of a larger whole, and although neural networks can be used to solve some very difficult problems, they’re still a specific tool suited to very specific tasks.
Although Geoffrey Hinton is an incredibly smart man who mathematically understands neural networks far better than I ever will, extrapolating that knowledge out to believing that a large language model has any kind of awareness or actual intelligence is absurd. It’s the underpants gnome economic theory, but instead of:
Collect underpants
?
Profit!
It looks more like:
Use neural network training to construct large language models.
?
Artificial general intelligence!
If LLMs were true artificial intelligence, they would be learning at an increasing rate as we give them more capacity, leading to the singularity as their intelligence reaches hockey-stick exponential growth. Instead, we’ve been throwing a growing amount of resources at these LLMs for increasingly smaller returns. We’ve thrown a few extra tricks into the mix, like “reasoning”, but beyond that, I believe it’s clear that we’re headed toward a local maximum: far enough from intelligence that would be truly useful (or represent an actual existential threat), yet resembling human output well enough to fool human decision makers into trusting these systems to solve problems they are incapable of solving.
Interesting talk but the number of times he completely dismisses the entire field of linguistics kind of makes me think he’s being disingenuous about his familiarity with it.
For one, I think he is dismissing holotes, the concept of “wholeness”: the idea that when you cut something apart into its individual parts, you lose something about the bigger picture. This deconstruction of language misses the larger picture of the human body as a whole, and how every part of us, from our assemblage of organs down to our DNA, impacts how we interact with and understand the world. He may have a great definition of understanding, but it still sounds (to me) like it’s potentially missing aspects of human/animal biologically based understanding.
For example, I have cancer, and about six months before I was diagnosed, I had begun to get more chronically depressed than usual. I felt hopeless and I didn’t know why. Surprisingly, that’s actually a symptom of my cancer. What understanding did I have that changed how I felt inside and how I understood the things around me? Suddenly I felt different about words and ideas, but nothing had changed externally; something had changed internally. The connections in my neural network had adjusted, the feelings and associations with words and ideas were different, but I hadn’t done anything to make that adjustment. No learning or understanding had happened. I had a mutation in my DNA that made that adjustment for me.
Further, I think he’s deeply misunderstanding (possibly intentionally?) what linguists like Chomsky are saying when they say humans are born with language. They mean that we are born with a genetic blueprint to understand language. Just like animals are born with a genetic blueprint to do things they were never trained to do. Many animals are born and almost immediately stand up to walk. This is the same principle. There are innate biologically ingrained understandings that help us along the path to understanding. It does not mean we are born understanding language as much as we are born with the building blocks of understanding the physical world in which we exist.
Anyway, interesting talk, but I immediately am skeptical of anyone who wholly dismisses an entire field of thought so casually.
For what it’s worth, I didn’t downvote you and I’m sorry people are doing so.
People really do not like seeing opposing viewpoints, eh? There’s disagreeing, and then there’s downvoting to oblivion without even engaging in a discussion, haha.
Even if they’re probably right, in such murky, uncertain waters where we’re not experts, one should keep at least a somewhat open mind, or live and let live.
It’s like talking with someone who thinks the Earth is flat. There isn’t anything to discuss. They’re objectively wrong.
Humans like to anthropomorphize everything. It’s why you can see a face on a car’s front grille. LLMs are ultra advanced pattern matching algorithms. They do not think or reason or have any kind of opinion or sentience, yet they are being utilized as if they do. Let’s see how it works out for the world, I guess.
I think so too, but I am really curious what will happen when we give them “bodies” with sensors so they can explore the world and have individual “experiences”. I could imagine they would act much more human after a while and might even develop some kind of sentience.
Of course they would also need some kind of memory and self-actualization processes.
Interaction with the physical world isn’t really required for us to evaluate how they deal with ‘experiences’. They have, in principle, access to all sorts of interesting experiences in online data. Some models have been enabled to fetch internet data and add it to the prompt to help synthesize an answer.
One key thing is that they don’t do any of this until directed to. They don’t have any desires; they just follow “generate a search query from the prompt, execute the search and fetch the results, treat the combination of the original prompt and the results as the context for generating more content, and return that to the user”.
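That loop, sketched in plain Python to underline the point (llm_complete and web_search here are hypothetical stand-ins, not any particular vendor’s API): every step is driven by the pipeline and the incoming prompt, not by any initiative from the model.

    def llm_complete(prompt: str) -> str:
        # Stand-in for a real language-model call; just echoes for illustration.
        return f"[model output for: {prompt[:40]}...]"

    def web_search(query: str) -> str:
        # Stand-in for a real search backend; returns canned snippets.
        return f"[search results for: {query}]"

    def answer_with_retrieval(user_prompt: str) -> str:
        # 1. The model only produces a query because this pipeline asks it to.
        query = llm_complete(f"Write a search query for: {user_prompt}")
        # 2. Fetch results and bolt them onto the original prompt as extra context.
        results = web_search(query)
        context = f"{user_prompt}\n\nSearch results:\n{results}"
        # 3. Generate the final answer from prompt + results and return it to the user.
        return llm_complete(f"Answer using the context below.\n\n{context}")

    print(answer_with_retrieval("What did Hinton say about LLMs?"))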
An LLM is not a scheme that credibly implies that more LLM == sapient existence. Such a thing may come, but it will be something different from an LLM. LLMs just look uncannily like dealing with people.
I think there are two basic mistakes you made. First, you think that we aren’t experts, but it’s definitely true that some of us have studied these topics for years in college or graduate school, and surely many other people are well read on the subject. Obviously you can’t easily confirm our backgrounds, but we exist. Second, people who are somewhat aware of the topic might realize that it’s not particularly productive to engage in discussion on it here because there’s too much background information that’s missing. It’s often the case that experts don’t try to discuss things because it’s the wrong venue, not because they feel superior.
That’s because they aren’t “aware” of anything.
This Nobel Prize winner and subject matter expert takes the opposite view
https://youtube.com/watch?v=IkdziSLYzHw&t=2730s