Hacker News

DonaldPShimoda · yesterday at 9:04 PM

> The reductionist, mechanical explanation of what AIs do is not the full picture

It is the full picture, to a first approximation. The statistical models involved are incredibly complex, and the mechanisms used to tend towards better outputs have improved drastically, but it is still fundamentally just statistics. I don't understand why you would try to argue otherwise. "Just statistics" is not a pejorative; if anything, I think it's incredibly impressive that we can do so much by using statistical models to predict things based on a context. But that means there are inherent limitations, and this is where my concern lies.
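To make the "just statistics" framing concrete, here is a deliberately tiny sketch (not an LLM, just a bigram counter) of what "predicting things based on a context" means at its most reductionist: count which token follows which, then sample in proportion to those counts. The corpus and function names are illustrative inventions, not anything from a real model.

```python
import random
from collections import Counter, defaultdict

# Toy training data: a single short "document".
corpus = "the cat sat on the mat and the cat slept".split()

# Count which token follows each token in the training data.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(context):
    """Sample the next token in proportion to its observed frequency."""
    options = counts[context]
    tokens, weights = zip(*options.items())
    return random.choices(tokens, weights=weights)[0]

# After "the", the model has seen "cat" twice and "mat" once,
# so it outputs "cat" about 2/3 of the time and "mat" about 1/3.
print(next_token("the"))
```

Modern models replace the lookup table with billions of learned parameters and a far richer notion of "context", but the output is still a sample from a learned conditional distribution, which is the point being made above.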

> AIs know more and can reason better than most humans

They do not "know" things; they do not "reason". They generate statistically likely outputs based on a huge and complex training set. The difference shows in practice: it is still possible to get even modern cutting-edge models to contradict themselves, or to express "thoughts" in ways that a self-aware, reasoning person never would.

That said, yes, the statistical models have been tuned to generate output that imitates reasoning processes very realistically, and the training data includes copious quantities of "facts" that reflect human knowledge, and this has even led to neat and surprising outcomes. I'm not suggesting otherwise. I just fundamentally do not believe it is the same process that humans use for cognition, and I think the fact that LLMs generate text that appears to follow these processes is misleading to many people.

> The easiest way isn't with rhetorical tricks or sycophancy—it's arguing compellingly, solving difficult problems, and producing good code.

This depends greatly on what you think "easiest" means. The trade-off is that we have invested a huge amount of compute to get here, at significant cost vis-à-vis available resources.


Replies

vehemenz · yesterday at 11:41 PM

[dead]