Well, algorithms don't think, and that's what LLMs are.
Your digital thermometer doesn't think either.
The question is more whether LLMs can accurately report their internal operations, not whether any of that counts as "thinking."
Simple algorithms can, e.g., be designed to report whether they hit an exceptional case and activated a different set of operations than usual.
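A minimal sketch of that kind of self-reporting (the function and labels are hypothetical, just for illustration): the report is accurate by construction, because the label is set inside the branch that actually ran.

```python
import math

def sqrt_with_trace(x: float) -> tuple[float, str]:
    """Compute a square root and report which code path executed."""
    if x < 0:
        # Exceptional case: fall back to NaN and say so in the report.
        return math.nan, "exceptional: negative input, returned NaN"
    # Usual case: standard sqrt path.
    return math.sqrt(x), "normal: standard sqrt path"

print(sqrt_with_trace(4.0))   # (2.0, 'normal: standard sqrt path')
print(sqrt_with_trace(-1.0))  # (nan, 'exceptional: negative input, returned NaN')
```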
I was asking for a technical argument against that spurious use of the term.