Hacker News

dclowd9901 | yesterday at 3:35 PM | 3 replies

Because they don't _understand_ things. If I teach an LLM that 3+5 is 8, it doesn't "get" that 4+5 is 9 (leave aside the details here, as I'm explaining for effect). It needs to be taught that as well, and so on. We understand exactly everything that goes into how LLMs generate answers.

The line of consciousness, as we understand it, is understanding. And as far as what actually constitutes consciousness, we're not even close to understanding. That doesn't mean that LLMs are conscious. It just means we're so far from the real answers to what makes us, it's inconceivable to think we could replicate it.


Replies

munksbeer | yesterday at 10:36 PM

> The line of consciousness, as we understand it, is understanding.

Is it? I'm no expert, by any stretch, but where does this theory come from?

I don't think anyone knows what consciousness is, or why we appear to have it, or even if we do have it. I don't even know that you're conscious. I could be the only conscious being in the universe and the rest of you are just zombies, with all the right external outputs to fool me, but no actual consciousness.

ACCount37 | yesterday at 4:03 PM

Leave aside "the details" like you being obviously, provably wrong?

We've known for a long while that even basic toy-scale AIs can "grok" and attain perfect generalization of addition that extends to unseen samples.

Humans generalize faster than most AIs, but AIs generalize too.
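The memorize-versus-generalize distinction being argued here can be shown with a toy contrast (a sketch, not an LLM; the `memorizer`/`rule_learner` names and the linear-fit setup are my own illustration): a model that stores input-output pairs fails on unseen sums, while one that fits the underlying rule from a few examples extends to sums it was never taught.

```python
# Toy illustration only (assumption: this is not how LLMs work internally;
# it just demonstrates memorization vs. rule-based generalization).

def memorizer(pairs):
    """'Learns' addition as a lookup table: no rule, so no generalization."""
    table = dict(pairs)
    return lambda a, b: table.get((a, b))  # unseen pairs -> None

def rule_learner(pairs):
    """Fits y = w1*a + w2*b by least squares (normal equations),
    recovering the general rule w1 = w2 = 1 from a few examples."""
    # Accumulate the entries of X^T X and X^T y for a 2-feature fit.
    s_aa = s_ab = s_bb = s_ay = s_by = 0.0
    for (a, b), y in pairs:
        s_aa += a * a; s_ab += a * b; s_bb += b * b
        s_ay += a * y; s_by += b * y
    det = s_aa * s_bb - s_ab * s_ab
    w1 = (s_bb * s_ay - s_ab * s_by) / det
    w2 = (s_aa * s_by - s_ab * s_ay) / det
    return lambda a, b: w1 * a + w2 * b

train = [((3, 5), 8), ((2, 2), 4), ((7, 1), 8)]
memo = memorizer(train)
rule = rule_learner(train)

print(memo(3, 5))          # 8: seen in training
print(memo(4, 5))          # None: never taught, no rule to fall back on
print(round(rule(4, 5)))   # 9: the fitted rule extends to unseen sums
```

Linear regression is of course a far weaker learner than a neural network, but the point transfers: a model that has internalized the rule, rather than the examples, answers 4+5 without being taught it.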

SpicyLemonZest | yesterday at 3:39 PM

> Because they don't _understand_ things. If I teach an LLM that 3+5 is 8, it doesn't "get" that 4+5 is 9 (leave aside the details here, as I'm explaining for effect). It needs to be taught that as well, and so on. We understand exactly everything that goes into how LLMs generate answers.

What you're saying just isn't true, even directionally. Deployed LLMs routinely generalize outside of their training set to apply patterns they learned within the training set. How else, for example, could LLMs be capable of summarizing new text they didn't see in training?