Yeah, that's compression. Although your later comments neglect the many years of physical experience that humans have as well as the billions of years of evolution.
And yes, by this definition, LLMs pass with flying colours.
I hate when people bring up this “billions of years of evolution” idea. It’s completely wrong and deluded in my opinion.
Firstly, humans have not been evolving for “billions” of years.
Homo sapiens have been around for maybe 300,000 years, and the Homo genus for 2-3 million. Before that, our lineage split from chimpanzees roughly 6-7 million years ago.
If you want to count the entire development of the brain, i.e. from mouse-like early mammals through apes to humans, that's about 200 million years.
If you want to think in terms of generations, that's only 50-75 million of them, i.e. “training loops”.
That’s really not very many.
Also, the bigger point is this: for well over 99.99% of that time we had no writing, and no complex abstract thinking was required of us.
So our ability to reason about maths, writing, science etc. has only been exercised in the last 2,000-2,500 years, i.e. on the order of 100-200 generations.
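A quick back-of-envelope check of those fractions, in Python. The figures used (200 million years of brain evolution, 2,500 years of writing, ~25 years per human generation) are the rough estimates assumed above, not precise data:

```python
# Rough, assumed figures from the argument above - not precise data.
BRAIN_EVOLUTION_YEARS = 200_000_000   # mouse-like mammals -> humans
LITERATE_YEARS = 2_500                # writing, maths, science
HUMAN_GENERATION_YEARS = 25           # typical human generation length

# What fraction of brain-evolution time have we had writing?
literate_fraction = LITERATE_YEARS / BRAIN_EVOLUTION_YEARS
print(f"Time with writing: {literate_fraction:.6%} of brain evolution")
# -> about 0.001%, i.e. ~99.999% of that time without writing

# How many human generations does the literate era span?
literate_generations = LITERATE_YEARS / HUMAN_GENERATION_YEARS
print(f"Human generations since writing: {literate_generations:.0f}")
# -> about 100 generations
```

With a shorter assumed generation length (say 12-15 years, closer to historical ages of first reproduction) the count rises toward 200, which is why the estimate is given as a range.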
Our brains were not “evolved” to do science, maths etc.
Most of evolutionary time was spent hunting, eating, and reproducing; it's only in a tiny fraction of that time that we've been doing maths, science, literature, philosophy.
So actually, these models have had massively more training than humans needed to reach roughly the same capabilities, while using insane amounts of computing power and energy.
Our brains evolved for a completely different world, environment, and daily life than the life we lead now.
So yes, LLMs are good, but they have been exposed to more data and training time than any human could absorb unless we lived for 100,000 years, and they still perform worse than we do on most problems!