I read an interesting book about consciousness recently: The Hidden Spring by Mark Solms.
Solms argues, I think convincingly, that consciousness fundamentally has to do with emotion rather than cognition. Consciousness is not produced by the cortex but by the brainstem, where signals from all over the body converge (pain, hunger, itchiness, and so on).
If that argument holds, then a petri dish of neurons is unlikely to be conscious, even if it performs some analogue of visual processing.
The book makes other arguments that I found less convincing: for example, that consciousness is "felt homeostasis" and that a fairly simple system (somewhat more complex than a thermometer) would be conscious, albeit minimally.
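To make the "felt homeostasis" idea concrete, here's a toy sketch (my own illustration, not Solms's formalism) of the kind of system he seems to mean: one step up from a thermometer, it doesn't just measure a variable, it acts on the error between its state and a setpoint, and that error signal is the candidate "feeling".

```python
# Toy homeostat: a feedback loop one step up from a thermometer.
# A thermometer only measures; this system also acts on the error
# between its internal state and a setpoint. The claim (as I read it)
# is that something like this error signal is the seed of feeling.

def homeostat(state: float, setpoint: float = 37.0, gain: float = 0.1) -> float:
    """One regulation step: nudge the state back toward the setpoint."""
    error = setpoint - state      # the 'felt' deviation (hunger, cold, ...)
    correction = gain * error     # act to reduce the deviation
    return state + correction

state = 35.0
for _ in range(50):
    state = homeostat(state)
print(f"state after regulation: {state:.2f}")  # converges toward 37.0
```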
> Consciousness is not produced by the cortex but by the brainstem, where signals from all over the body converge (pain, hunger, itchiness, and so on).
Which just raises the question of how pain or hunger is any different from a reward function, the very thing we use to train neural networks in reinforcement learning. Or how it's even different from fungi growing towards food (pleasure) while avoiding salt (pain).
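To illustrate, here's a toy sketch (all numbers made up) of a reward-climbing agent that is behaviourally indistinguishable from the fungus: it just moves in whichever direction a scalar signal increases.

```python
# Toy gradient-climber: behaviourally equivalent to fungi growing toward
# food (positive reward) and away from salt (negative reward). The
# locations and reward shape are made up for illustration.

def reward(x: float) -> float:
    food, salt = 10.0, -10.0  # positions on a 1-D line
    return 1.0 / (1 + (x - food) ** 2) - 1.0 / (1 + (x - salt) ** 2)

def step(x: float, eps: float = 0.1) -> float:
    """Move in whichever direction locally increases reward."""
    return x + eps if reward(x + eps) > reward(x - eps) else x - eps

x = 0.0
for _ in range(200):
    x = step(x)
print(f"settled near the food source: x = {x:.1f}")
```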
People have been saying for aeons that consciousness originates in the (mammalian) cortex and not in the brainstem. To justify killing all sorts of animals ;-)
This whole debate makes one thing extremely clear: people are very good at moving goalposts. We've blasted past the Turing test for all practical purposes, but we moved the definition of 'true intelligence'. Consciousness and intelligence have long been seen as highly correlated, or even as the same thing. But now we need to separate the two.
If we eventually create a truly intelligent AI (we're not there yet, I think), it will probably be a long time before people accept that creating an intelligent being means it should have 'rights' as well.
We're definitely not there yet, but at what point does turning off an AI become the same as killing a being? I think that's not being talked about enough. Sure, LLMs are just prediction engines. But so are we: our brains are prediction engines tuned by evolution to predict the near future as well as possible, to maximize survival. We are definitely conscious. But a housefly, is that conscious? What makes the difference? It's hard to tell.
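To unpack what "prediction engine" means in miniature, here's a toy bigram model (my own illustration; a real LLM differs enormously in scale and mechanism, but the job, predicting what comes next from what came before, is the same).

```python
# Toy 'prediction engine': a bigram model that predicts the next word
# from the current one. The training text is made up for illustration.
from collections import Counter, defaultdict

text = "the fly sees food the fly avoids the swatter the fly survives"
words = text.split()

counts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the most frequent next word seen in training."""
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # -> 'fly' (seen 3 times after 'the')
print(predict("fly"))  # -> 'sees' (first of the equally likely options)
```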
OTOH, an AI has no evolutionary reason to have a concept of fear or suffering, so maybe it's more like the Douglas Adams creature that doesn't mind being killed?