Their values are about AI safety. Geopolitically they could care less. You might think it's a bad take, but at least they are consistent. AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.
There's no AI safety. Either the AI does what the user asks, in which case the user can be prosecuted for the crime, or the AI does what IT wants and cannot be prosecuted for a crime. There's no safety; you just need to decide if you're on the side of alignment with humans or if you're on the side of the AIs.
I think you mean “couldn’t care less”. “Could care less” implies they care.
Consistency isn't a virtue. A guy who murders people at a consistent rate isn't better than a guy who murders people only on weekends.
>AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.
Humanity includes the future victims of AI weapons.