
JohnKemeny · 05/15/2025

We shouldn’t anthropomorphize LLMs—they don’t “struggle.” A better framing is: why is the most likely next token, given the prior context, one that reinforces the earlier wrong turn?
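The question can be made concrete with a toy autoregressive model. The sketch below is a hypothetical bigram lookup table (all probabilities invented for illustration, far simpler than a real LLM): each greedy choice maximizes probability *given the prefix so far*, so once an early token commits the sequence to a path, the most likely continuation is the one consistent with that path, not a correction of it.

```python
# Toy autoregressive "model": P(next | previous token) as a lookup table.
# All tokens and probabilities are invented for illustration only.
cond = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.6, "dog": 0.4},
    "a":   {"dog": 0.7, "cat": 0.3},
    "cat": {"sat": 0.9, "</s>": 0.1},
    "dog": {"ran": 0.9, "</s>": 0.1},
    "sat": {"</s>": 1.0},
    "ran": {"</s>": 1.0},
}

def greedy_continue(prefix, max_len=10):
    """Greedily extend a token prefix one token at a time."""
    seq = list(prefix)
    while seq[-1] != "</s>" and len(seq) < max_len:
        dist = cond[seq[-1]]
        # The argmax conditions only on the context emitted so far:
        # whatever the prefix committed to, the most likely next token
        # is the one that continues that commitment.
        seq.append(max(dist, key=dist.get))
    return seq

print(greedy_continue(["<s>"]))       # ['<s>', 'the', 'cat', 'sat', '</s>']
print(greedy_continue(["<s>", "a"]))  # ['<s>', 'a', 'dog', 'ran', '</s>']
```

Swapping the first token from "the" to "a" flips every subsequent choice, even though the model itself never changed: the continuation reinforces the earlier turn because that is exactly what conditional probability given the prefix rewards.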