Hacker News

deadbabe · last Thursday at 7:11 PM

It is far more accurate to say that LLMs collapse or reduce response probabilities for a given input than to describe them as doing any kind of “thinking” or “reasoning”.
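A minimal sketch of what that "collapsing" could look like: a model assigns a probability distribution over possible next tokens for a given input, and decoding collapses that distribution to a single output. The vocabulary, logit values, and decoding choices below are purely illustrative assumptions, not taken from any particular model.

```python
import numpy as np

def softmax(logits):
    # Turn raw scores into a probability distribution over the vocabulary.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# Toy vocabulary and made-up logits standing in for a model's output
# for one input context (illustrative values only).
vocab = ["yes", "no", "maybe", "unknown"]
logits = np.array([2.1, 0.3, 1.2, -0.5])

probs = softmax(logits)

# Greedy decoding: collapse the distribution to the single most likely token.
greedy = vocab[int(np.argmax(probs))]

# Sampling: collapse the distribution to one token drawn at random,
# weighted by its probability.
rng = np.random.default_rng(0)
sampled = vocab[rng.choice(len(vocab), p=probs)]

print(dict(zip(vocab, probs.round(3))), greedy, sampled)
```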