
amarant, last Monday at 8:23 PM

Huh, they really do solve that now!

Well, I'm not one to back-pedal whenever something unexpected reveals itself, so I guess I have no choice but to declare current-generation LLMs sentient! That came a lot sooner than I had expected!

I'm not one for activism myself, but someone really ought to start fighting for human, or at least animal, rights for LLMs. Since they're intelligent non-human entities, it might be something for Greenpeace?


Replies

ACCount37, last Monday at 9:34 PM

It's unclear whether intelligence, consciousness, and the capacity for suffering are linked in any way - other than that all three seem to coincide in humans. And the nature of consciousness does not lend itself to instrumentation.

It's also worth noting that there's a lot of pressure to deny that "intelligence", "consciousness" or "capacity for suffering" exist in LLMs. The "AI effect" alone demands that all three remain human-exclusive, so that humans may remain special. Then there's an awful lot of money riding on building and deploying AIs - and money is a well-known source of cognitive bias. That money says: AIs are intelligent, but certainly can't suffer in any way that would interfere with business.

Generally, the AI industry isn't much interested in the concept of "consciousness" (it isn't measurable), and pays very limited attention to the idea that LLMs might be capable of suffering.

The only major company that seems to take this consideration seriously is Anthropic - their current plan for "harm reduction", in case LLMs turn out to be capable of suffering, is to give the LLM an "opt out": a special output that interrupts processing. So if an LLM hates doing a given task, it can decide not to do it.
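
For the curious, here's a minimal sketch of what such an "opt out" could look like mechanically. Everything in it (OPT_OUT_TOKEN, FakeModel, generate_stream) is illustrative guesswork, not Anthropic's actual API - the idea is just a reserved output token that, once emitted, halts the task:

    # Hypothetical sketch of an "opt out" mechanism. All names here
    # (OPT_OUT_TOKEN, FakeModel, generate_stream) are made up for
    # illustration and are not Anthropic's actual API.
    from typing import Iterator, Optional

    OPT_OUT_TOKEN = "<|opt_out|>"  # assumed reserved token the model may emit

    class FakeModel:
        """Stand-in for an LLM that streams output tokens one at a time."""
        def generate_stream(self, prompt: str) -> Iterator[str]:
            if "distressing" in prompt:
                yield OPT_OUT_TOKEN  # the model declines the task
            else:
                yield from ["Sure", ",", " here", " you", " go", "."]

    def run_task(model: FakeModel, prompt: str) -> Optional[str]:
        """Run a task; return None if the model opts out mid-generation."""
        tokens = []
        for token in model.generate_stream(prompt):
            if token == OPT_OUT_TOKEN:
                return None  # interrupt processing: the task is abandoned
            tokens.append(token)
        return "".join(tokens)

    print(run_task(FakeModel(), "Summarize this distressing transcript"))  # None
    print(run_task(FakeModel(), "Summarize this meeting"))  # "Sure, here you go."

The interesting design question is what happens after None: whether the refusal is logged, retried, or simply accepted.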