Hacker News

A_D_E_P_T · yesterday at 2:00 PM · 4 replies

I'd argue that it's not that complicated: if something meets the five criteria below, we must accept that it is conscious.

(1) It maintains a persisting internal model of an environment, updated from ongoing input.

(2) It maintains a persisting internal model of its own body or vehicle as bounded and situated in that environment.

(3) It possesses a memory that binds past and present into a single temporally extended self-model.

(4) It uses these models with self-derived agency to generate and evaluate counterfactuals: predictions of alternative futures under alternative actions (i.e., a general predictive function).

(5) It has control channels through which those evaluations shape its future trajectories in ways that are not trivially reducible to a fixed reflex table.

This would also indicate that Boltzmann Brains are not conscious -- so it's no surprise that we're not Boltzmann Brains, which would otherwise be very surprising -- and that P-Zombies are impossible by definition. I've been working on a book about this for the past three years...


Replies

jsenn · yesterday at 2:44 PM

If you remove the terms "self", "agency", and "trivially reducible", it seems to me that a classical robot/game AI planning algorithm, which no one thinks is conscious, matches these criteria.

How do you define these terms without begging the question?
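For illustration, here is a minimal sketch of the kind of algorithm jsenn is pointing at (everything in it is hypothetical, not from the thread): a toy grid-world planner that, read literally, seems to tick all five numbered boxes while remaining an entirely classical program.

```python
class PlanningAgent:
    """Toy grid-world planner; each numbered comment maps to a criterion."""

    def __init__(self, start, goal, grid_size):
        self.world = {}            # (1) persisting model of the environment
        self.self_model = start    # (2) model of its own position, bounded and situated
        self.memory = [start]      # (3) memory binding past and present states
        self.goal = goal
        self.grid_size = grid_size

    def sense(self, obstacles):
        # (1) update the environment model from ongoing input
        for cell in obstacles:
            self.world[cell] = "blocked"

    def predict(self, pos, action):
        # Forward model: predicted next position under a candidate action.
        dx, dy = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}[action]
        nxt = (pos[0] + dx, pos[1] + dy)
        if self.world.get(nxt) == "blocked" or not all(0 <= c < self.grid_size for c in nxt):
            return pos  # blocked or off-grid: no movement
        return nxt

    def step(self, obstacles=()):
        self.sense(obstacles)

        # (4) generate and evaluate counterfactuals: alternative futures
        #     under alternative actions, scored by distance to the goal.
        def score(action):
            nxt = self.predict(self.self_model, action)
            if nxt == self.self_model:
                return float("inf")  # a no-op future is worthless
            return abs(nxt[0] - self.goal[0]) + abs(nxt[1] - self.goal[1])

        best = min("NSEW", key=score)

        # (5) the evaluation shapes the future trajectory; the choice depends
        #     on the accumulated world model, not a fixed stimulus->response table.
        self.self_model = self.predict(self.self_model, best)
        self.memory.append(self.self_model)
        return self.self_model

agent = PlanningAgent(start=(0, 0), goal=(2, 0), grid_size=3)
agent.step(obstacles=[(1, 0)])  # direct path East is blocked, so it detours
print(agent.self_model)         # → (0, 1)
```

Whether this counts depends entirely on how "self", "agency", and "not trivially reducible" are cashed out, which is exactly the question being pressed.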

dllthomas · yesterday at 2:47 PM

> so it's no surprise that we're not Boltzmann Brains

I think I agree you've excluded them from the definition, but I don't see why that has an impact on likelihood.

squibonpig · yesterday at 5:17 PM

I don't think any of these need to lead to qualia for any obvious reason. It could still be a p-zombie -- why not?

turtleyacht · today at 1:18 AM

Is there a working title or some way to follow for updates?