You’re still anthropomorphizing what these models are doing.
> You’re still anthropomorphizing what these models are doing.
Didn't we build them to imitate humans? They're anthropomorphic by definition.
It's just shorthand.
Would you prefer we start using words like aiThinking and aiReasoning to differentiate them? Or is it reasonable to figure it out from context?
I've come to the same conclusion. "AI" was just the marketing term for a large language model packaged as a chatbot, which harked back to sci-fi characters like Data or GLaDOS. It can look impressive and often gives correct answers, but it's just a bunch of next-word predictions stacked on top of each other. The word "AI" has drifted so far from that older meaning that a second acronym, "AGI", had to be coined to represent what "AI" once did.
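To make "next-word predictions stacked on top of each other" concrete, here's a minimal greedy decoding loop in Python using Hugging Face transformers. It's a sketch, not any particular product's pipeline, and gpt2 is just a small stand-in model chosen because it runs anywhere:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("AI is just", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits        # one forward pass: a score for every vocab token
        next_id = logits[0, -1].argmax()  # keep only the single most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # append it and go again

print(tokenizer.decode(ids[0]))
```

Everything a chatbot says comes out of that loop (plus sampling instead of argmax); there's no other mechanism underneath.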
The new "reasoning" or "chain of thought" AIs are similarly just a bunch of conventional LLM inputs and outputs stacked on top of each other. I agree with the GP that it feels a bit magical at first, but the opportunity to run a DeepSeek distillation on my PC, where each step of the process is visible, dispelled quite a bit of the magic.
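The "stacked inputs and outputs" can be sketched the same way: call the model once for a reasoning trace, paste that trace back into the prompt, and call it again. This is an illustrative two-pass loop, not DeepSeek's actual training or inference pipeline, and gpt2 again stands in for whatever model you run locally:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def complete(prompt: str, n: int = 40) -> str:
    """One ordinary LLM completion: tokens in, tokens out."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=n, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(out[0][ids.shape[1]:])

question = "What is 17 * 24?"
# Pass 1: prompt the model to "show its work" -- still just a completion.
thoughts = complete(question + "\nLet's think step by step:")
# Pass 2: the same model, with its own pass-1 output pasted into the prompt.
print(complete(question + "\nReasoning: " + thoughts + "\nAnswer:"))
```

Each pass is an ordinary completion; the only "reasoning" is that the second prompt contains the first pass's output, which is exactly the step-by-step text you can watch scroll by when running a distilled model locally.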