Anthropic's stance here is admirable. If nothing else, their acknowledgement that they cannot predict how these powerful technologies might be abused is a bold and intelligent position to take.
I'd admire them if they took a principled or moral stance on AI. As it stands, they're saying "we don't want fully autonomous weapons because they might kill too many Americans by accident while trying to kill non-Americans" and "we don't want AI to surveil Americans, but anyone else, sure".
It’s not just admirable; it’s the obvious position to take, and any alternative is head-scratching.
It’s clear that this is mostly a glorified loyalty test rather than a practical ask by the administration. Strangely reminiscent of Soviet or Chinese policies, where being agreeable to authority mattered more than providing value to the state.