
adamzwasserman last Monday at 9:06 PM

The article misses three critical points:

1. Conflates consciousness with "thinking" - LLMs may process information effectively without being conscious, but the article treats these as the same phenomenon

2. Ignores the cerebellum cases - We have documented cases of humans leading normal lives with little to no brain beyond a cerebellum, which contradicts simplistic "brain = deep learning" equivalences

3. Most damning: When you apply these exact same techniques to anything OTHER than language, the results are mediocre. Video generation still can't figure out basic physics (glass bouncing instead of shattering, ropes defying physics). Computer vision has been worked on since the 1960s - far longer than LLMs - yet it's nowhere near achieving what looks like "understanding."

The timeline is the smoking gun: vision had decades of head start, yet LLMs leapfrogged it in just a few years. That strongly suggests the "magic" is in language itself (which has been proven to be fractal and already heavily compressed/structured by human cognition) - NOT in the neural architecture. We're not teaching machines to think.

We're teaching them to navigate a pre-existing map that was already built.


Replies

kenjackson last Monday at 9:35 PM

"vision had decades of head start, yet LLMs leapfrogged it in just a few years."

From an evolutionary perspective, though, vision had a head start of millions of years over written language. Additionally, almost all animals have quite good vision mechanisms, but very few do any written communication. Behaviors that map to intelligence don't emerge concurrently. It may well be that there are different forms of signals/sensors/mechanical skills that contribute to the emergence of different intelligences.

It really feels more and more like we should recast AGI as Artificial Human Intelligence Likeness (AHIL).

eloisant last Monday at 9:13 PM

This is why I'm very skeptical about the "Nobel prize level" claims. To win a Nobel prize you would have to produce something completely new. LLMs will probably be able to reach a Ph.D.-level understanding of existing research, but bringing something new is a different matter.

penteract last Monday at 9:45 PM

There's a whole paragraph in the article that says basically the same thing as your point 3 ("glass bouncing, instead of shattering, and ropes defying physics" is literally a quote from the article). I don't see how you can claim the article missed it.

aucisson_masque last Monday at 10:21 PM

> 2. Ignores the cerebellum cases - We have documented cases of humans leading normal lives with little to no brain beyond a cerebellum, which contradicts simplistic "brain = deep learning" equivalences

I went to look for it on Google but couldn't find much. Could you provide a link or something to learn more about it?

I found numerous cases of people living without a cerebellum, but I fail to see how that would justify your reasoning.

KoolKat23 last Monday at 11:06 PM

1. Consciousness itself is probably just an illusion, a phenomenon/name for something that occurs when you bunch thinking together. Think of this objectively and base it on what we know of the brain. It is literally working off the hardware we have; there's no magic.

2. That's just a well-adapted neural network (I suspect more brain is left than you let on): a multimodal model making the most of its limited compute and whatever GPIO it has.

3. Humans navigate a pre-existing map that is already built. We can't understand things in other dimensions and need to abstract them. We're mediocre at computation.

I know there are people who like to think humans should always be special.

PaulDavisThe1st last Monday at 9:21 PM

> Conflates consciousness with "thinking"

I don't see it. Got a quote that demonstrates this?

nearbuy last Monday at 9:23 PM

Can you explain #2? What does the part of the brain that's primarily for balance and motor control tell us about deep learning?

bjourne last Monday at 9:42 PM

> 1. Conflates consciousness with "thinking" - LLMs may process information effectively without being conscious, but the article treats these as the same phenomenon

There is NO WAY you can define "consciousness" in such a non-tautological, non-circular way that it includes all humans but excludes all LLMs.
