I can honestly understand both positions. The U.S. military must be able to use technology as it sees fit; it cannot allow private companies to control the use of military equipment. Anthropic must prevent a future where AIs make autonomous life-and-death decisions without humans in the loop. Living in that future would be completely untenable.
What I don’t understand is why the two parties couldn’t reach an agreement. Surely autonomous murderous robots are something the U.S. government has an interest in preventing.
I am fine with this. If you are a defense contractor, you are a defense contractor, and you follow the military requirements your government deems necessary - or you stop being a defense contractor.
I wouldn't want a bullet manufacturer to hold back on my government based on its own internal sense of ethics (whether I agreed with it or not, it's not their place).
Everyone is getting wrapped around the axle here, but this is about the big picture, not the specifics. A private company should not have the ability to dictate how its technology is used by the government. If they can’t agree to that, then they shouldn't sell their technology to the government. Personally, I don’t want to be spied on by the government with it (I don’t think their tech does that), but I also don’t want Anthropic having operational control over a mission.