Hacker News

ACCount37 · yesterday at 3:04 PM

Are they? Not conscious?

If you list out every prominent theory of consciousness, you'd find that about a quarter rules out LLMs, a quarter tentatively rules LLMs in, and what remains is "uncertain about LLMs". And, of course, we don't know which theory of consciousness is correct - or if any of them is.

So, what is it that makes you so sure, oh so very certain, that LLMs just "feel" conscious but aren't?


Replies

dclowd9901 · yesterday at 3:35 PM

Because they don't _understand_ things. If I teach an LLM that 3+5 is 8, it doesn't "get" that 4+5 is 9 (leave aside the details here, as I'm explaining for effect). It needs to be taught that as well, and so on. We understand exactly everything that goes into how LLMs generate answers.

The line for consciousness, as we understand it, is understanding. And as for what actually constitutes consciousness, we're not even close to knowing. That doesn't mean that LLMs are conscious. It just means we're so far from real answers about what makes us what we are that it's inconceivable to think we could have replicated it.

everdrive · yesterday at 4:45 PM

This isn't meant to be an answer that would satisfy everyone, but in my opinion consciousness is a specific biological / evolutionary adaptation that has to do with managing status, relationships, and caring for young. It's about having an identity and an ego and building mental models of the egos / identities / etc of others.

I don't think there's any reason we couldn't in principle attach this sort of concept to an LLM, but it's not something we've actually done. (And no, prompting an LLM to act as if it has an identity does not count.)

jacquesm · yesterday at 3:09 PM

The fact that it's a box with a plug and a state that can be fully known. A conscious entity has a state that cannot be fully known. Far smarter people than me have made this argument, and much more eloquently.

Turing aimed too low.

afavour · yesterday at 3:11 PM

> So, what is it that makes you so sure, oh so very certain, that LLMs just "feel" conscious but aren't?

Because we know what they actually are on the inside. You're talking as if they're equivalent to the human brain, whose functioning we're still figuring out. They're not. They're large language models. We know how they work, and the way they work does not result in a functioning consciousness.
