Hear, hear!
I have long thought this, but haven't had as good a way to put it as you did.
If you think about geniuses like Einstein and Ramanujan, they understood things before they had the mathematical language to express them. LLMs are the opposite: they fail to understand things even after untold effort, training data, and compute.
So the question is, how intelligent are LLMs when you reduce their training data and training? Since they rapidly devolve into nonsense, the answer must be that they have no internal intelligence.
Ever had the experience of helping someone who's chronically doing the wrong thing, only to eventually find they had an incorrect assumption - a flawed piece of reasoning deterministically generating wrong answers? LLMs don't do that; they just lack understanding. They'll hallucinate unrelated things because they don't know what they're talking about - you may have also had this experience with someone :)
You might be interested in reading about the minimum description length (MDL) principle [1]. Despite all the dissenters to your argument, what you're positing is quite similar to MDL. It's how you can fairly compare models (I did some research in this area for LLMs during my PhD).
Simply put, to compare models, you describe both the model and the training data using a code (usually reported as a number of bits). The trained model that represents the data in the fewest bits is the more powerful model.
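To make that concrete, here's a minimal sketch of the two-part version in Python; the bit counts and per-token probabilities are made up purely for illustration and are not from any real model:

    import math

    # Two-part MDL: bits to describe the model itself, plus bits to encode
    # the data under the model (its negative log2-likelihood).
    def total_description_length(model_bits, data_probs):
        data_bits = -sum(math.log2(p) for p in data_probs)
        return model_bits + data_bits

    # Made-up numbers: model A is bigger but assigns higher probability to
    # each of 1M tokens; model B is smaller but fits the data worse.
    bits_a = total_description_length(5_000_000, [0.9] * 1_000_000)
    bits_b = total_description_length(1_000_000, [0.6] * 1_000_000)

    # Under MDL, whichever total is smaller wins.
    print(f"A: {bits_a:,.0f} bits  B: {bits_b:,.0f} bits")

With these particular numbers B wins: A's extra parameters don't buy enough likelihood to pay for their own description. That trade-off is the whole point of the comparison.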
This paper [2] from ICML 2021 shows a practical approach to estimating MDL for NLP models applied to text datasets.
> So the question is, how intelligent are LLMs when you reduce their training data and training? Since they rapidly devolve into nonsense, the answer must be that they have no internal intelligence.
This would be the equivalent of removing all of a human's senses from birth and expecting them to somehow learn things. They will not. Therefore humans are not intelligent?
> LLMs don't do that; they just lack understanding.
You have no idea what they are doing. Since they are far smaller than their training data, they cannot simply be storing it; they must have learned some internal algorithm. That algorithm is drawing its patterns from somewhere - those are its internal, incorrect assumptions. It does not operate the way a human does, but it seems ridiculous to say it lacks intelligence because of that.
It sounds like you've reached a conclusion, that LLMs cannot be intelligent because they have said really weird things before, and are trying to justify it in reverse. Sure, an LLM may not have grasped that particular thing. But are you suggesting you've never heard a human who is feigning understanding of a topic say some really weird things, akin to an LLM? I'm an educator, and I have heard the strangest things that I just cannot comprehend no matter how much I dig. It really feels like shifting goalposts. We need to do better than that.