
gf000 · last Monday at 6:31 PM

Well, unless you believe in some spiritual, non-physical aspect of consciousness, we could probably agree that human intelligence is Turing-complete (with a slightly sloppy use of terms).

So any other Turing-complete model can emulate it, including a computer. We can even randomly generate Turing machines, since they are just data. Now imagine we get extremely lucky and end up with a super-intelligent program that, through whatever medium it communicates (it could be simply text-based, though 2D video with audio is no different from my perspective), can't be differentiated from a human being.
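(A concrete aside, a toy Python sketch of my own with arbitrary sizes, nothing rigorous: a Turing machine is literally just a finite transition table, so "randomly generate" means nothing fancier than rolling dice over that table.)

    import random

    STATES = range(4)        # machine states; state 0 is the start state
    SYMBOLS = (0, 1)         # tape alphabet
    MOVES = (-1, 1)          # head moves one cell left or right
    HALT = "H"               # distinguished halting state

    def random_machine():
        # A transition table is plain data: (state, symbol) -> (state', write, move)
        return {
            (q, s): (random.choice(list(STATES) + [HALT]),
                     random.choice(SYMBOLS),
                     random.choice(MOVES))
            for q in STATES for s in SYMBOLS
        }

    def run(machine, max_steps=1000):
        # Run on an initially blank (all-zero) tape, bounded so
        # non-halting machines don't loop forever.
        tape, head, state = {}, 0, 0
        for _ in range(max_steps):
            state, write, move = machine[(state, tape.get(head, 0))]
            tape[head], head = write, head + move
            if state == HALT:
                break
        return tape

    print(run(random_machine()))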

Would you consider it sentient?

Now replace the random generation with, say, a backpropagation algorithm. If the model is sufficiently large, don't you think that's indistinguishable from the former case - that is, that novel qualities could emerge?
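(Again a toy sketch of my own, all numbers arbitrary: the only point is that backpropagation turns "generate and hope" into a directed search over the same space of parameters.)

    import numpy as np

    # Searching for a "program" (the weights W) by gradient
    # descent instead of by blind random generation.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(8, 3))           # toy inputs
    y = rng.normal(size=(8, 1))           # toy targets
    W = rng.normal(size=(3, 1))           # parameters being searched for

    for _ in range(100):
        pred = x @ W                      # forward pass
        grad = x.T @ (pred - y) / len(x)  # gradient of (half) the mean squared error
        W -= 0.1 * grad                   # step downhill instead of guessing again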

With that said, I don't think current LLMs are anywhere close to this category, but I just don't think your reasoning is sound.


Replies

DanHulton · yesterday at 1:00 AM

> we could probably agree that human intelligence is Turing-complete (with a slightly sloppy use of terms).

> So any other Turing-complete model can emulate it

You're going off the rails IMMEDIATELY in your logic.

Sure, one Turing-complete computer language can have its logic "emulated" by another, fine. But human intelligence is not a computer language -- you're mixing up the terms "Turing complete" and "Turing test".

It's like mixing up the terms "strawberry jam" and "traffic jam" and then going on to talk about how cars taste on toast. It's nonsensical.

almosthere · last Monday at 8:32 PM

We used to say "if you put a million monkeys on typewriters, you would eventually get Shakespeare," and no one says that anymore, because now we can literally write Shakespeare with an LLM.

And the monkey strategy has been 100% dismissed as shit.

We know how to deploy monkeys on typewriters, but we don't know what they'll type.

We know how to deploy transformers to train and inference a model, but we don't know what they'll type.

We DON'T know how a thinking human (or animal) brain works.

Do you see the difference?

myrmidon · last Monday at 6:50 PM

> Would you consider it sentient?

Absolutely.

If you simulated a human brain atom by atom, would you think the resulting construct would NOT be? What would be missing?

I think consciousness is simply an emergent property of our nervous system, but in order to express itself, "language" is obviously needed, and that requires a lot of complexity (more than what we typically saw in animals or computer systems until recently).

prmph · last Monday at 6:55 PM

There are many aspects to this that people like yourself miss, but I think we need satisfactory answers to them (or at least rigorous explorations of them) before we can make headway in these sorts of discussions.

Imagine we assume that A.I. could be conscious. What would be the identity/scope of that consciousness? To understand what I'm driving at, let's make an analogy to humans. Our consciousness is scoped to our bodies. We see through sense organs, and our brain, which processes these signals, is located at a specific point in space. But we still do not know how consciousness arises in the brain and is bound to the body.

If you equate computation of sufficient complexity to consciousness, then the question arises: what exactly about computation would produce consciousness? If we perform the same computation on a different substrate, would that be the same consciousness, or a copy of the original? If it would not be the same consciousness, then what exactly gives consciousness its identity?

I believe you would find it ridiculous to say that, just because we are performing the computation on this chip, the identity of the resulting consciousness is scoped to this chip.
