Hacker News

popalchemist · yesterday at 9:40 PM

Some of your points are lucid, some are not. For example, an LLM does not "work out" a math equation using anything approaching reasoning; rather, it returns the string that is most likely to follow, based on probabilities learned from its training data. Depending on that data and the question being asked, the output can be accurate or absurd.

That's not of the same nature as reasoning your way to an answer.
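To make the distinction concrete, here's a minimal sketch of what "most likely string" means. The token probabilities below are invented for illustration, not taken from any real model; the point is that the model selects a likely continuation rather than performing arithmetic.

```python
# Hypothetical next-token distribution for the prompt "2 + 2 = ".
# The numbers are made up for illustration only.
next_token_probs = {
    "4": 0.92,    # common in training data, so highly probable
    "5": 0.03,
    "22": 0.02,   # plausible-looking but absurd continuation
    "fish": 0.01,
}

def greedy_decode(probs):
    """Return the single most probable token (greedy decoding)."""
    return max(probs, key=probs.get)

# The "answer" is the likeliest token, not a computed result.
print(greedy_decode(next_token_probs))
```

If the training data had made a wrong continuation more frequent, the same procedure would confidently emit the wrong answer, which is the accurate-or-absurd point above.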