That is an excellent point! I feel like people have two modes of reasoning: a lazy mode where we assume we already know the problem, and an active mode where something prompts us to pay attention and actually reason about the problem. Perhaps LLMs only have the lazy mode?
I have found multiple definitions in the literature of what you describe:
1. Fast thinking vs. slow thinking (Kahneman's System 1 and System 2).
2. Intuitive thinking vs. symbolic thinking.
3. Interpolated thinking (i.e., pattern matching or curve fitting) vs. generalization.
4. Level 1 thinking vs. level 2 thinking (in terms of OpenAI's definitions of levels of intelligence).
All of these definitions describe the same thing.
Currently, all LLMs are trained to use the "lazy" thinking approach. o1-preview is advertised as the exception: it is trained or fine-tuned on a vast number of reasoning patterns.
I prompted o1 with "Analyze this problem word-by-word to ensure that you fully understand it. Make no assumptions." and it solved the "riddle" correctly.
https://chatgpt.com/share/6709473b-b22c-8012-a30d-42c8482cc6...
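If you want to try reproducing this yourself, here is a minimal sketch using the OpenAI Python SDK (my assumption about tooling; the riddle text is a hypothetical placeholder). Note that o1-preview did not accept system messages at launch, so the instruction is prepended to the user message:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    riddle = "<paste the riddle text here>"  # hypothetical placeholder

    response = client.chat.completions.create(
        model="o1-preview",
        messages=[
            {
                "role": "user",
                # o1-preview rejected the "system" role at launch,
                # so the instruction goes at the top of the user message.
                "content": (
                    "Analyze this problem word-by-word to ensure that you "
                    "fully understand it. Make no assumptions.\n\n" + riddle
                ),
            }
        ],
    )

    print(response.choices[0].message.content)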