> only means they have specialized, deeper contextualized information
No, LLMs can have that contextualized information. Understanding in a reasoning sense means classifying the thing and developing a deterministic algorithm to process it. If you don't have a deterministic algorithm to process it, it isn't understanding. LLMs learn to approximate; we do that too, but we then go further and develop algorithms that process input and generate output by a predefined logical process.
A sorting algorithm is a good example when you compare it with an LLM sorting a list. Both may produce the correct outcome, but the sorting algorithm "understood" the logic: it follows that specific logic every time and delivers consistent performance.
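To make the contrast concrete, here is a minimal sketch of what "following a specific logic" means. An insertion sort applies one fixed comparison rule, so the same input always yields the same output by the same steps, whereas an LLM samples tokens that merely approximate the sorted order:

```python
# A deterministic sorting algorithm in the sense above: one fixed
# comparison rule, identical behavior on every run.
def insertion_sort(items):
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        # Shift larger elements right until key's position is found.
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

print(insertion_sort([3, 1, 2]))  # [1, 2, 3], every time, by the same steps
```

No amount of unusual input changes the logic it applies, which is exactly the consistency an LLM's learned approximation doesn't guarantee.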
> understanding in a reasoning sense means classifying the thing and developing a deterministic algorithm to process it.
That's the learning part I was talking about. It is mainly done by humans at the moment, which is why I called it proto-intelligence.
> If you don't have a deterministic algorithm to process it, it isn't understanding.
Commercial AIs like ChatGPT do have the ability to call programs and integrate the results into their processing. Those AIs are not really just LLMs. The results are still rough and poor, but the concept is there and growing.
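The pattern being described can be sketched roughly like this: the model emits a structured request, a deterministic program runs, and the result is fed back into the model's context. All names here are illustrative, not any vendor's actual API:

```python
# Hypothetical sketch of the tool-calling loop: the model delegates
# work it can only approximate to a deterministic program.
import json

# Registry of deterministic tools the model may request (illustrative).
TOOLS = {"sort_numbers": lambda nums: sorted(nums)}

def handle_model_output(model_message):
    """If the model asked for a tool, run it deterministically
    and return the result to be appended to the model's context."""
    request = json.loads(model_message)
    tool = TOOLS[request["tool"]]
    return {"tool": request["tool"], "result": tool(request["args"])}

# Instead of guessing the sorted order token by token, the model
# emits a request and the deterministic algorithm does the work:
print(handle_model_output('{"tool": "sort_numbers", "args": [3, 1, 2]}'))
```

In this framing, the "understanding" lives in the deterministic tool, and the LLM's job shrinks to classifying the problem and routing it there.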