Hacker News

dns_snek · yesterday at 10:29 AM

That's a motte and bailey fallacy. Nobody said that they aren't useful, the argument is that they can't reason [1]. The world is full of useful tools that can't reason or think in any capacity.

[1] That does not mean they can never produce text that describes a valid reasoning process; it means they can't do so reliably. Sometimes their output is genius, and other times you're left questioning whether they even have the reasoning skills of a first-grader.


Replies

chimprich · yesterday at 12:09 PM

I don't agree that LLMs can't reason reliably. If you give them a simple reasoning question, they can generally make a decent attempt at coming up with a solution. Complete howlers are rare from cutting-edge models. (If you disagree, give an example!)

Humans sometimes make mistakes in reasoning, too; sometimes they come up with conclusions that leave me completely bewildered (like somehow reasoning that the Earth is flat).

I think we can all agree that humans are significantly better and more consistent at reasoning than even the best LLMs, but the claim that LLMs cannot reliably reason doesn't seem to match the evidence.