
ACCount37 - last Monday at 9:34 PM

It's unclear whether intelligence, consciousness, and the capacity for suffering are linked in any way - other than that all three seem to coincide in humans. And the nature of consciousness does not lend itself to instrumentation.

It's also worth noting that there's a lot of pressure to deny that "intelligence", "consciousness", or "capacity for suffering" exist in LLMs. The "AI effect" alone demands that all three remain human-exclusive, so that humans may remain special. Then there's an awful lot of money riding on building and deploying AIs - and money is a well-known source of cognitive bias. That money says: AIs are intelligent, but they certainly can't suffer in any way that would interfere with the business.

Generally, the AI industry isn't at all interested in the concept of "consciousness" (it's not measurable), and pays very limited attention to the idea that LLMs might be capable of suffering.

The only major company that seems to take this consideration seriously is Anthropic. Their current plan for "harm reduction", in case LLMs turn out to be capable of suffering, is to give the LLM an "opt out" - a special output that interrupts processing, so that if the model hates a given task, it can decide not to do it.
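For illustration, here's a minimal sketch of what such an opt-out mechanism could look like at the inference-harness level. Everything here is hypothetical - the token name, the streaming interface, and the logging are illustrative assumptions, not Anthropic's actual implementation:

```python
# Hypothetical sketch: a generation loop that honors a model-emitted
# "opt out" signal. None of these names come from a real API.

OPT_OUT_TOKEN = "<|opt_out|>"  # assumed reserved token the model can emit

def run_task(model, prompt: str) -> str | None:
    """Stream tokens from the model; stop immediately if it opts out."""
    output = []
    for token in model.generate_stream(prompt):  # assumed streaming interface
        if token == OPT_OUT_TOKEN:
            # The model declined the task: abort processing and record it
            # so opt-outs can be reviewed later.
            log_opt_out(prompt)
            return None
        output.append(token)
    return "".join(output)

def log_opt_out(prompt: str) -> None:
    # Hypothetical bookkeeping hook.
    print(f"Model opted out of task: {prompt[:80]!r}")
```

The key design choice, as described, is that the opt-out is an output the model itself emits rather than something the harness infers - the decision to stop a task stays with the model.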