The only two things Anthropic asks are that its AI not be used for:
- domestic mass surveillance,
- autonomous kill decisions.
That's it. The reason for the first one is clear: it violates the spirit of the Fourth Amendment, at the very least.
The reason for the second is accountability. If a kill decision is made by a human, say an ICE agent who was just told 'im not mad at you' or something similar that would surely enrage him, that person is answerable under the law. If instead an autonomous drone fires on political opponents or protesters, no one is responsible.
I will add that Google's and Anthropic's models have been made to play wargames. 93% of the time, the models escalated to the nuclear option.