Hacker News

travisjungroth · 10/11/2024

How people don’t see the irony of commenting “stochastic parrots” every time an LLM reasoning failure comes up is beyond me.

There are ways to trick LLMs. There are also ways to trick people. If asking a tricky question and getting a wrong answer is enough to disprove reasoning, humans aren’t capable of reasoning, either.


Replies

tgv · 10/13/2024

It's all in the architecture. They literally predict the next word by association with the input buffer. o1 tries to fix part of the problem by imposing external control over it, which should improve logical reasoning, but if it can't spot the missing information in its associations, it's doomed to repeat the same error. Yes, quite a few people are also pretty stupid, emotion-driven association machines. That's commonly recognized, except perhaps by their parents.
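
For concreteness, the "predict the next word by association with the input buffer" loop looks roughly like the sketch below. It assumes GPT-2 via the Hugging Face transformers library and greedy decoding; both are illustrative choices, not a claim about any specific model's internals.

    # Autoregressive next-token loop: at each step the model only
    # scores "which token comes next?" given the current buffer.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")           # illustrative model choice
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The capital of France is", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(5):
            logits = model(ids).logits         # shape: [batch, seq_len, vocab]
            next_id = logits[0, -1].argmax()   # greedy: take the single most likely token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # append and repeat
    print(tok.decode(ids[0]))

Sampling strategies and o1-style external control loops wrap this same step; they change how the next token is chosen or checked, not the underlying predict-and-append mechanism.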