Hacker News

ako · yesterday at 7:32 AM · 3 replies

An LLM by itself is not thinking, just remembering and autocompleting. But if you add a feedback loop where it can use tools, investigate external files or processes, and then autocomplete on the results, you get to see something that is (close to) thinking. I've seen Claude Code debug things by adding print statements to the source, reasoning about the output, and then deciding on next steps. This feedback loop is what sets AI tools apart: they can all use the same LLM, but the quality of the feedback loop makes the difference.
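The loop being described is roughly the standard "agent" pattern: generate, run a tool, append the output, generate again. A minimal sketch in Python, assuming a hypothetical `call_llm` function and a single shell tool (not any particular vendor's API):

    import subprocess

    def call_llm(messages):
        """Hypothetical model call. Returns either a final answer
        {"content": "..."} or a tool request {"tool": "run", "cmd": "..."}."""
        raise NotImplementedError  # stand-in for a real LLM API

    def agent_loop(task, max_steps=10):
        messages = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            reply = call_llm(messages)
            if "tool" not in reply:
                return reply["content"]  # model produced a final answer
            # Run the requested command, e.g. a script instrumented with print statements
            result = subprocess.run(
                reply["cmd"], shell=True, capture_output=True, text=True, timeout=60
            )
            # Feed the tool output back so the next completion can react to it
            messages.append({"role": "assistant", "content": str(reply)})
            messages.append({"role": "user", "content": result.stdout + result.stderr})
        return "step limit reached"

The "quality of the feedback loop" then comes down to which tools are exposed, how their output is summarized back into the context, and how failures are handled.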


Replies

DebtDeflation · yesterday at 1:03 PM

>But if you add a feedback loop where it can use tools, investigate external files or processes, and then autocomplete on the results, you get to see something that is (close to) thinking

It's still just information retrieval. You're just dividing it into internal information (the compressed representation of the training data) and external information (web search, API calls to other systems, etc.). There is a lot of hidden knowledge embedded in language, and LLMs do a good job of teasing it out in a way that resembles reasoning/thinking but really isn't.

assimpleaspossi · yesterday at 11:08 AM

>>you get to see something that is (close to) thinking.

Isn't that still "not thinking"?

lossyalgo · yesterday at 2:13 PM

Just ask it how many r's are in "strawberry" and you will realize there isn't a lot of reasoning going on; it's just trickery on top of token generators.
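The usual explanation is tokenization rather than (or in addition to) a lack of reasoning: the model never sees individual characters. A tiny illustration, with the token split shown only as an assumed example since it varies by tokenizer:

    # Character-level code answers this trivially:
    word = "strawberry"
    print(word.count("r"))  # 3

    # An LLM, by contrast, sees learned subword tokens
    # (e.g. something like "str" + "aw" + "berry", depending on the tokenizer),
    # so letter counts must be inferred from training data rather than computed.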
