Hacker News

viccis, last Monday at 6:03 PM

LLMs are models that predict tokens. They don't think, they don't build with blocks. They would never be able to synthesize knowledge about QM.
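To make "predict tokens" concrete, here is a toy sketch of next-token prediction: a bigram counter that picks the most frequent follower of the previous token. This is a deliberately simplified stand-in, not how a real LLM works (LLMs use learned neural networks over long contexts, not raw counts); the function and variable names are my own illustration.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    # Count how often each token follows each preceding token.
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    # Greedy decoding: return the most frequent follower, or None if unseen.
    followers = counts.get(token)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat the cat ran".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

The debate in this thread is essentially about whether scaling this idea up (vastly richer models, vastly more data) can amount to synthesizing new knowledge, or remains "just" prediction.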


Replies

PaulDavisThe1st, last Monday at 6:38 PM

I am a deep LLM skeptic.

But I think open questions about the role of language in human thought leave the door slightly ajar on whether manipulating the tokens of language might be more central to human cognition than we've tended to think.

If that turned out to be true, then "a model predicting tokens" may have more power than that description suggests.

I doubt it, and I doubt it quite a lot. But I don't think it is impossible that something at least a little bit along these lines turns out to be true.

strbean, last Monday at 6:16 PM

You realize the parent said "This would be an interesting way to test proposition X" and you responded with "X is false because I say so", right?
