Hacker News

Anthropic and the Department of War

15 points | by paulpauper, yesterday at 11:13 PM | 6 comments

Comments

dirk94018, today at 1:27 AM

The Pentagon seems to see this as a procurement issue: "we bought a tool, don't tell us how to use it." Anthropic, by contrast, seems concerned that the tool's nature is shaped by the constraints put on it, that we don't really understand this AI thing, and that an unconstrained version could be a worse and more dangerous tool.

bigyabai, yesterday at 11:46 PM

> This whole incident, and what happens next, is all going straight into future training data. AIs will know what you are trying to do, even more so than all of the humans, and they will react accordingly. It will not be something that can be suppressed. You are not going to like the results.

Besides the fact that this is comically hyperbolic... isn't Mowshowitz wrong here? Training data and input data can be censored if the feds really wanted to, especially given that they hold the IP for Claude's foundation models.

> If you can’t do it cooperatively with Anthropic? Then find someone else.

This is way too little, way too late. The Pentagon has already delivered its ultimatum; there's no emotional appeal left to make. The article's white-glove ethical and legal concerns are (unfortunately) not pragmatic, and its idyllic vision of capitalism will not rescue Anthropic from the clutches of crony capitalism.

In the words of Dr. Breen, "You have chosen, or been chosen..."
