An objective and grounded ethical framework that applies to all agents should be a top priority.
Philosophy has been too damn anthropocentric, too hung up on consciousness and other speculative nerd snipe time wasters that without observation we can argue about endlessly.
And now here we are and the academy is sleeping on the job while software devs have to figure it all out.
I've moved 50% of my time to morals for machina that is grounded in physics, I'm testing it out with unsloth right now, so far I think it works, the machines have stopped killing kyle at least.
> morals for machina that is grounded in physics
That is fascinating. How could that work? It seems to be in conflict with the idea that values are inherently subjective. Would you start with the proposition that the laws of thermodynamics are "good" in some sense? Maybe hard code in a value judgement about order versus disorder?
That approach would seem to rule out machina morals that have preferential alignment with homo sapiens.
Is philosophy actually hung up on that? I assumed “what is consciousness” was a big question in philosophy in the same way that whether or not Schrödinger’s cat is alive or not is a big question in physics: which is to say, it is not a big question, it is just an evocative little example that outsiders get caught up on.
> An objective and grounded ethical framework that applies to all agents should be a top priority.
I mean leaving aside the problem of computability, representability, comparability of values, or the fact that agency exists in opposition (virus vs human, gazelle vs lion) and even a higher order framework to resolve those oppositions is a form of another agency in itself with its own implicit privileged vantage point, why does it sound to me that focusing on agency in itself is just another way of pushing protestant work ethic? What happens to non-teleological, non-productive existence for example?
The critique of anthropocentrism often risks smuggling in misanthropy whether intended or not; humans will still exist, their claims will count, and they cannot be reduced to mere agency - unless you are their line manager. Anyone who wants to shave that down has to present stronger arguments than centricity. In addition to proving that they can be anything other than anthropocentric - even if done through machines as their extensions - any person who claims to have access to the seat of objectivity sounds like a medieval templar shouting "deus vult" on their favorite proposition.
> An objective and grounded ethical framework that applies to all agents should be a top priority.
Sounds like a petrified civilization.
In the later Dune books, the protagonist's solution to this risk was to scatter humanity faster than any global (galactic) dictatorship could take hold. Maybe any consistent order should be considered bad?