Hacker News

deadbabe | 02/20/2025 | 0 replies | view on HN

It is far more accurate to say LLMs are collapsing a probability distribution over possible responses for a given input than to say they are doing any kind of "thinking" or "reasoning".
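A minimal sketch of what "collapsing response probabilities" means in decoding terms (the tokens and logits below are invented for illustration, not from any real model): the model assigns logits to candidate next tokens, softmax turns those into a distribution, and sampling collapses that distribution into one concrete output.

```python
import numpy as np

# Hypothetical logits a model might assign to four candidate
# next tokens for some input (made-up numbers for illustration).
tokens = ["Paris", "London", "Rome", "Berlin"]
logits = np.array([4.2, 1.1, 0.7, 0.3])

# Softmax normalizes logits into a probability distribution over responses.
probs = np.exp(logits) / np.sum(np.exp(logits))

# Sampling "collapses" the distribution to a single output token.
rng = np.random.default_rng(0)
choice = rng.choice(tokens, p=probs)

print(dict(zip(tokens, probs.round(3))))  # the full distribution
print(choice)                             # the collapsed response
```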