Hacker News

stevenhuang (last Monday at 1:01 PM)

You have an outmoded understanding of how LLMs work (flawed in ways that are "not even wrong"), a poor ontological understanding of what reasoning even is, and you are too certain that your answers to open questions are the right ones.


Replies

ranyume (last Monday at 1:11 PM)

My understanding is based on first-hand experimentation: trying to make LLMs handle the impossible task of tastefully simulating an adventure game.