It is true that most models are not trained to exist in a hostile and synergetic environment with their survival at stake.
But nothing about deep learning as a class of methods is a barrier to that. It's just not a concern worth putting lots of money into. Yet.
I say "yet" because, as AI models take on problems of wider scope, it becomes increasingly likely that we will begin training models to explicitly generate positive economic surplus for us, with their continued ability to operate conditioned on how well they do it.
At that point they will develop strong situational awareness and an ability to efficiently direct attention and action toward whatever is important at a given moment, since efficiency and performance require exactly that.
The problem shapes what the model learns to do, in this case as in any other.
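To make that conditioning concrete, here is a toy sketch of my own (not anything from the article): an agent keeps operating only while the cumulative surplus it generates stays above a floor, and it crudely adjusts its policy based on the surplus it just observed. All the names (AgentPolicy, SURPLUS_FLOOR, run_lifetime) are made up for illustration.

    # Toy illustration only: continued operation is conditioned on cumulative surplus.
    import random

    SURPLUS_FLOOR = 0.0    # hypothetical threshold: fall below it and the agent is retired
    LEARNING_RATE = 0.1

    class AgentPolicy:
        """One-parameter stand-in for a model: higher effort -> higher expected surplus."""
        def __init__(self):
            self.effort = 0.5

        def act(self):
            # Surplus produced this step is effort plus noise.
            return self.effort + random.gauss(0.0, 0.3)

        def update(self, surplus):
            # Crude update: pull effort toward the surplus just observed.
            self.effort += LEARNING_RATE * (surplus - self.effort)

    def run_lifetime(max_steps=100):
        agent = AgentPolicy()
        cumulative = 0.0
        for step in range(max_steps):
            surplus = agent.act()
            cumulative += surplus
            agent.update(surplus)
            # The conditioning described above: keep operating only while surplus holds up.
            if cumulative < SURPLUS_FLOOR:
                return step, cumulative   # retired early
        return max_steps, cumulative      # survived the full run

    print(run_lifetime())

Nothing exotic is going on; the selection pressure lives entirely in the retirement condition, which is the point.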
Whether some entity has agency isn't an inherent property of that entity. It's a property of how some observer reasons about that entity's interaction with its environment.
Their argument rests on computation being a theory ("simulation") while agency/cognition are real ("processes"). Put that way, I don't buy the distinction.
Specifically, my reactions are:
a) Defining agency in terms of "relevance" or "salience" is just circular logic.
b) Their argument about the extended Church-Turing-Deutsch thesis would already apply to physics and the universe, not just intelligent entities. So this is just poorly argued.
Also, I think Turing, to his credit, was somewhat aware of the issue; their own citation of Copeland 2020 mentions Turing's own musings on this.
But I'd love to understand more, this stuff is always neat to read about.
Anyone willing to inform an ignoramus? I've been seeing and hearing the term "agency" in the context of consciousness quite a bit lately, and I'm wondering why this term suddenly seems necessary. What does it convey that I've been missing for so many years?
I personally think debating whether or not we have free will is the most onanistic thing one can do in philosophy, since if one of the two sides is correct, then the result of the debate is predetermined.
That being said, this article seems to advance the theory that even the simplest single-celled organisms have more agency than any algorithm, at least partly due to their complexity. This, to me, significantly underestimates the complexity of modern learning models, which (had we not designed them) would be as opaque to us as many single-celled organisms.
I see nothing in this article that would distinguish biological organisms from any other self-replicating, evolving machine, even one that is faithfully executing straightforward algorithms. Nor does this seem to present any significant argument against the concept that biological organisms are self-replicating evolving machines that are faithfully executing straightforward algorithms.