Hacker News

fc417fc802 · yesterday at 10:31 PM · 3 replies

Our brains work differently, yes. What evidence do you have that our brains are not functionally equivalent to a series of weights being used to predict the next token?

I'm not claiming that to be the case, merely pointing out that you don't appear to have a reasonable claim to the contrary.

> not even including the possibility that we have a soul or any other spiritual substrate.

If we're going to veer off into mysticism then the LLM discussion is also going to get a lot weirder. Perhaps we ought to stick to a materialist scientific approach?


Replies

nothinkjustai · yesterday at 10:48 PM

You are setting the bar in a way that makes “functional equivalence” unfalsifiable.

If by “functionally equivalent” you mean “can produce similar linguistic outputs in some domains,” then sure, we’re already there in narrow cases. But that’s a very thin slice of what brains do, which is exactly why it isn’t functional equivalence.

There are a few non-mystical, testable differences that matter:

- Online learning vs. frozen inference: brains update continuously from tiny amounts of data; a deployed LLM's weights stay fixed during inference.

- Grounding: human cognition is tied to perception, action, and feedback from the world. LLMs operate over symbol sequences divorced from direct experience.

- Memory: humans have persistent, multi-scale memory (episodic, procedural, etc.) that integrates over a lifetime. LLM “memory” is either weights (static) or context (ephemeral).

- Agency: brains are part of systems that generate their own goals and act on the world. LLMs optimize a fixed objective (next-token prediction) and don’t have endogenous drives.
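The first difference is easy to make concrete with a toy sketch (plain Python; all names here are illustrative, not any real LLM API): a one-weight linear predictor deployed in two copies on a data stream whose true slope drifts midway. The "frozen" copy keeps its training-time weight, the way a deployed LLM's weights stay fixed; the "online" copy takes one SGD step after every observation, the way a brain keeps updating.

```python
class LinearPredictor:
    """Toy model: predict y = w * x with a single learnable weight."""

    def __init__(self, w: float = 0.0):
        self.w = w

    def predict(self, x: float) -> float:
        return self.w * x

    def update(self, x: float, y: float, lr: float = 0.01) -> None:
        # One SGD step on squared error. The online learner calls this
        # after every observation; the frozen model never does.
        err = self.predict(x) - y
        self.w -= lr * err * x


def stream_error(model: LinearPredictor, data, online: bool) -> float:
    """Total absolute prediction error over a stream of (x, y) pairs."""
    total = 0.0
    for x, y in data:
        total += abs(model.predict(x) - y)
        if online:
            model.update(x, y)
    return total


# The world drifts: the true slope jumps from 2.0 to 5.0 halfway through.
data = [(x, 2.0 * x) for x in range(1, 11)] + [(x, 5.0 * x) for x in range(1, 11)]

frozen = LinearPredictor(w=2.0)  # "trained" perfectly on the old world, then fixed
online = LinearPredictor(w=2.0)  # same starting point, but keeps learning

frozen_err = stream_error(frozen, data, online=False)
online_err = stream_error(online, data, online=True)
print(f"frozen: {frozen_err:.1f}  online: {online_err:.1f}")
```

The frozen copy is flawless until the drift and then wrong forever after, while the online copy converges toward the new slope within a handful of examples. It's a cartoon, of course, but it's a falsifiable, non-mystical operationalization of the bullet above.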

CPLX · yesterday at 10:34 PM

What evidence do you have that a sausage is not functionally equivalent to a cucumber?
