Hacker News

tovej · yesterday at 4:23 PM · 3 replies

Because they don't. The chain-of-reasoning feature is really just a way to get the LLM to prompt itself with more text.

The fact that it generates these "thinking" steps does not mean it is using them for reasoning. Its most useful effect is making it seem to a human that there is a reasoning process.
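Mechanically it's just two sampling passes. A minimal sketch of that claim (the generate() call is a hypothetical stand-in for any model API, not a real library):

    # Chain-of-thought reduced to its control flow: the "thinking"
    # tokens are plain text that gets fed back into the prompt.
    def generate(prompt: str) -> str:
        """Hypothetical LLM completion call (stub for illustration)."""
        raise NotImplementedError

    def answer_with_cot(question: str) -> str:
        # Pass 1: sample "thinking" tokens. Nothing inspects or
        # verifies them; they are just more generated text.
        thoughts = generate(question + "\nLet's think step by step.\n")
        # Pass 2: sample the answer conditioned on question + thoughts.
        return generate(question + "\n" + thoughts + "\nAnswer: ")

The "reasoning" never leaves the token stream; it only changes what the second pass is conditioned on.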


Replies

andai · yesterday at 5:10 PM

Is this position axiomatic or falsifiable? What would it take to change your mind?

seba_dos1 · yesterday at 4:39 PM

I love how generating strings like "let me check my notes" is effective at producing somewhat better end results: it pushes the weights towards outputting text that appears to be written by someone who actually did check their notes :D
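A toy version of that conditioning effect (made-up corpus, just to show how a prefix shifts the next-token distribution, nothing like a real model):

    # Toy next-token "model" over a fabricated corpus: the careful-
    # sounding prefix co-occurs with the right answer, so conditioning
    # on it concentrates probability there. Illustration only.
    from collections import Counter

    corpus = [
        ("let me check my notes . the answer is", "4"),
        ("let me check my notes . the answer is", "4"),
        ("the answer is", "4"),
        ("the answer is", "5"),
        ("the answer is", "22"),
    ]

    def next_token_dist(prefix: str) -> dict:
        counts = Counter(tok for ctx, tok in corpus if ctx.endswith(prefix))
        total = sum(counts.values())
        return {tok: n / total for tok, n in counts.items()}

    print(next_token_dist("the answer is"))
    # {'4': 0.6, '5': 0.2, '22': 0.2} -- mixed
    print(next_token_dist("my notes . the answer is"))
    # {'4': 1.0} -- the prefix alone shifts the distribution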

satvikpendem · yesterday at 4:32 PM

How would you determine that humans have reasoning, then, in a way that LLMs do not?
