Hacker News

adamzwasserman · yesterday at 2:44 AM

1. I don't need to define consciousness to point out that you're using an unproven claim ('consciousness is probably an illusion') as the foundation of your argument. That's circular reasoning.

2. 'It's a spectrum' doesn't address the point. You claimed LLMs approximate brain function because they have similar architecture. Massive structural variation in biological brains producing similar function undermines that claim.

3. You're still missing it. Humans use language to describe discoveries made through physical interaction. LLMs can only recombine those descriptions. They can't discover that a description is wrong by stubbing their toe or running an experiment. Language is downstream of physical discovery, not a substitute for it.


Replies

KoolKat23 · yesterday at 7:47 AM

1. You do. You probably have a different definition in mind and are saying I'm wrong merely because I don't share it.

2. That directly addresses your point. In the abstract, it shows brains are basically no different from multimodal models: train with different data types and it still works, perhaps even better. LLMs are already trained with images, video, sound, and nowadays even robot sensor feedback, with no fundamental changes to the architecture (see Gemini 2.5).

3. That's merely an additional input: give it sensors, or have a human relay that data. Your toe is just relaying its sensor information to your brain.