
vasilipupkin | 10/12/2024 | 0 replies | view on HN

I think it's an absurd question in some sense. LLMs maximize the conditional probability of the next word being correct. Suppose they get to the point where they do that with 100% accuracy. How can you tell the difference between that and "reasoning"? You can't. So the question of whether they are "reasoning" or not is religious, not quantitative.
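For concreteness, here is a toy sketch of what "maximize the conditional probability of the next word" means. The probability table is invented for illustration; a real LLM computes these conditional probabilities with a neural network over a vocabulary of tens of thousands of tokens, but the selection step is the same argmax.

    # Toy next-word predictor: pick the word with the highest
    # conditional probability given the recent context.
    # Probabilities below are made up purely for illustration.
    next_word_probs = {
        ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
        ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    }

    def greedy_next_word(context):
        """Return the most probable next word given the last two context words."""
        probs = next_word_probs[tuple(context[-2:])]
        return max(probs, key=probs.get)

    print(greedy_next_word(["the", "cat"]))  # -> "sat"

The point of the comment stands independently of the sketch: if this selection were perfectly accurate on every context, its outputs would be behaviorally indistinguishable from "reasoning."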