> all LLM output is based on the likelihood of one word coming after another, given the prompt.
Right, but it has to reason about what that next word should be. It has to model the problem and then consider ways to approach it.
No, it does not reason at all. LLM "reasoning" is just an illusion.
When an LLM is "reasoning", it's just feeding its own output back into itself and giving it another go.
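To make that concrete, here is a minimal sketch of the autoregressive loop both sides are describing. The bigram table and token names are purely illustrative stand-ins for a real model (a real LLM conditions on the whole context, not just the last token), but the loop structure is the point: each sampled token is appended to the context and fed straight back in as input for the next prediction.

```python
import random

# Toy stand-in for an LLM: maps the last token of the context to a
# next-token probability distribution. Purely illustrative data.
BIGRAM_PROBS = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a":   {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.3, "ran": 0.7},
    "sat": {"<eos>": 1.0},
    "ran": {"<eos>": 1.0},
}

def next_token_probs(context):
    """Stand-in for a forward pass: P(next token | context)."""
    return BIGRAM_PROBS[context[-1]]

def generate(prompt, max_tokens=10):
    context = list(prompt)
    for _ in range(max_tokens):
        probs = next_token_probs(context)
        # Sample the next token from the model's distribution:
        # "likelihood of one word coming after another".
        token = random.choices(list(probs), weights=probs.values())[0]
        if token == "<eos>":
            break
        # The loop in question: the model's own output is appended to
        # the context and fed back in for the next prediction.
        context.append(token)
    return context

print(" ".join(generate(["<s>"])))
```

Whether you call that loop "reasoning" or "an illusion of reasoning" is exactly the disagreement here; the mechanism itself is not in dispute.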