Hacker News

Statement from Dario Amodei on our discussions with the Department of War

1794 points · by qwertox · yesterday at 10:42 PM · 939 comments

Comments

lebovic · today at 12:21 AM

I used to work at Anthropic, and I wrote a comment on a thread earlier this week about the RSP update [1]. It's heartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving its goals. I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that a) is against their values, and b) they think is a net negative in the long term. (Many others would, too; those three are just the most well-known.)

That doesn't mean that I always agree with their decisions, and it doesn't mean that Anthropic is a perfect company. Many groups that are driven by ideals have still committed horrible acts.

But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and genuinely motivated by trying to make the transition to powerful AI go well.

[1]: https://news.ycombinator.com/item?id=47145963#47149908

qaid · yesterday at 11:20 PM

I was reading halfway through when one line struck a nerve with me:

> But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

So not today, but the door is open for this after AI systems have gathered enough "training data"?

Then I re-read the previous paragraph and realized it's specifically only criticizing

> AI-driven domestic mass surveillance

And it neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance.

A real shame. I thought "Anthropic" was about being concerned about humans, not "my people" vs. "your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War.

jjcm · yesterday at 11:55 PM

This is the strongest statement in the post:

> They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

This contradictory messaging puts to rest any doubt that this is a strong-arm by the government to allow any use. I really like Anthropic's approach here, which is to state in turn that they're happy to help the government move off of Anthropic. It's a messaging ploy for sure, but it puts the ball in the current administration's court.

tabbott · yesterday at 11:43 PM

An organization's character really shows through when its values conflict with its self-interest.

It's inspiring to see that Anthropic is capable of taking a principled stand, despite having raised a fortune in venture capital.

I don't think a lot of companies would have made this choice. I wish them the very best of luck in weathering the consequences of their courage.

helaoban · yesterday at 11:34 PM

All of these problems are downstream of Congress having thoroughly abdicated its powers to the executive.

The military should be reined in at the legislative level, by constraining what it can and cannot do under law. Popular action is the only way to make that happen. Energy directed anywhere else is a waste.

Private corporations should never be allowed to dictate how the military acts. Such a thought would be unbearable if it weren't laughably impossible. The technology can just be requisitioned; there is nothing a corporation or a private individual can do about that. Or the models could be developed internally, after requisitioning the data centers.

To watch CEOs of private corporations being mythologized for something that a) they should never be able to do and b) are incapable of doing is a testament to how distorted our picture of reality has become.

flumpcakes · yesterday at 11:04 PM

This is such a depressing read. What is becoming of the USA? Let's hope sanity prevails and the next election cycle can bring in some competent non-grievance based leadership.

eisfresser · today at 6:47 AM

> mass __domestic__ surveillance is incompatible with democratic values

But mass surveillance of Australians or Danes is aligned with democratic values as long as it's the Americans doing it?

I don't think the moral high ground Anthropic is taking here is high enough.

wohoef · today at 8:50 AM

Anthropic's two demands are: 1. No domestic mass surveillance 2. No autonomous killing

I'm wondering if 2. was added simply to justify them not cooperating. It's a lot easier to defend 1. + 2. than just 1. If in the future they do decide to cooperate with the DoW, they could settle on doing only mass surveillance, but no autonomous killings. This would be presented as a victory for both parties since they both partially get what they wanted, even though autonomous killing was never really on the table for either of them. Which is a big if given the current administration.

nkoren · yesterday at 11:18 PM

This makes me a very happy Claude Max subscriber.

Finally, someone of consequence not kissing the ring. I hope this gives others courage to do the same.

alangibson · yesterday at 10:54 PM

It's not named the Department of War because Congress didn't rename it.

Other than that, good on ya.

kace91 · yesterday at 11:17 PM

As someone who is potentially their client and not domestic, it's really reassuring that they have no concerns with mass spying on peaceful citizens of my particular corner of the world.

bambax · today at 6:28 AM

> These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Nicely put. In other words: Department of Morons.

QuiEgo · today at 3:57 AM

I'd be amused beyond all reason if we saw this chain of events:

- Anthropic says "no"

- DoD says "ok you're a supply chain risk" (meaning many companies with gov't contracts can no longer use them)

- A bunch of tech companies say "you know what? We think we'd lose more money from falling behind on AI than we'd lose from not having your contracts."

Bonus points if it's some of the hyperscalers like AWS.

Hilarity ensues as they blow up (pun intended) their whole supply chain and rapidly backtrack.

GreenJacketBoy · today at 7:48 AM

"fully autonomous weapons" from a private company; "Department of War". Hard to believe I'm not reading science fiction.

exabrial · today at 3:01 AM

My brother-in-law did some "time with the brass," as he calls it. His take was that the DOD, er, DOW would, as an example, never acquire a fighter jet that "wouldn't target and kill a civilian airliner," citing that on 9/11 we literally almost did that. The DOW is acquiring instruments of war, which is probably uncomfortable for a lot of people to consider.

His conclusion was that the limits of use ought to be contractual, not baked into the LLM, which is where the fallout seems to be. He noted that the Pentagon has agreed to terms like that in the past.

To me, that seems like a reasonable compromise for both parties, but both sides are so far entrenched now that we're unlikely to see one.

KronisLV · today at 8:21 AM

Feels like they’re leaving a lot of money on the table and inviting existential peril by not bending the knee to the current Great Leader.

It does feel like what anyone sane should do (especially given the contradictions being pointed out and the fact that the technology isn’t even there yet) but when you metaphorically have Landa at your door asking for milk, I’m not sure it’s smart.

I feel like what most corpos would do, would be to just roll along with it.

atleastoptimal · yesterday at 11:56 PM

I was concerned originally when I heard that Anthropic, which often professed to be the "good guy" AI company that would always prioritize human welfare, opted to sell priority access to its models to the Pentagon in the first place.

The devil's advocate position in their favor I imagine would be that they believe some AI lab would inevitably be the one to serve the military industrial complex, and overall it's better that the one with the most inflexible moral code be the one to do it.

czierleyn · today at 8:26 AM

Being from Europe I do not like the remark that he only objects to DOMESTIC mass surveillance.

zb1plus · today at 2:00 AM

It would be hilarious if the Europeans got everyone visas and gave some kind of tax benefit to Anthropic and poached the entire company.

danbrooks · yesterday at 11:01 PM

Props to Dario and Anthropic for taking a moral stand. A rarity in tech these days.

rustyhancock · today at 8:08 AM

Surely this is a powerful signal to divest from Anthropic if you don't live in the US? There's a lot of "here's what we support doing to foreigners, but no way can you do it in the US."

I can never tell how much of this is puffery from Anthropic.

I do think they like to overstate their power.

maelito · today at 8:35 AM

> to defeat our autocratic adversaries.

I'm not sure who's targeted here. The folks that want to invade the EU?

placebo · today at 7:05 AM

Grok's thoughts on the matter:

"In an ideal world, I'd want xAI to emulate the maturity Anthropic showed here: affirm willingness to help defend democracies (including via classified/intel/defense tools), sacrifice short-term revenue if needed to block adversarial access, but stand firm on refusing to enable the most civilizationally corrosive misuses when the tech simply isn't ready or the societal cost is too high. Saying "no" to powerful customers—even the DoD—when the ask undermines core principles is hard, but it's the kind of spine that builds long-term trust and credibility."

It also acknowledged that this is not what is happening...

freakynit · today at 1:59 AM

Welp, I never thought the show "Person of Interest" would come to life anytime soon, but here we are. In case you haven't watched it, it's time to give it a go. Bear with season 2, since things really start to escalate from season 3 onwards. Season 1 is a must, though.

Metacelsus · yesterday at 11:02 PM

I'm glad to see Dario and Anthropic showing some spine! A lot of other people would have caved.

asmor · yesterday at 11:07 PM

As a "foreign national", what's the deal with making the distinction between domestic mass surveillance and foreign mass surveillance? Are there no democracies aside from the US? Don't we know since Snowden that if the US wants to do domestic surveillance they'll just ask GCHQ to share their "foreign" surveillance capabilities?

with · today at 7:15 AM

the interesting question is why dario published this. these disputes normally stay behind NDAs and closed doors. going public means anthropic decided the reputational upside of being the company that said no outweighs the risk of burning the relationship permanently. that's a calculated move, not really just a principled one.

ra · yesterday at 11:07 PM

> "mass domestic surveillance" - mass surveillance of non-domestic civilians is OK?

contubernio · today at 6:33 AM

"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community."

The moral incoherence and disconnect evident in these two statements is at the heart of why there is generalized mistrust of large tech companies.

The "values" on display are anything but what they pretend to be.

ApolloFortyNine · yesterday at 11:11 PM

Idk if the reporting was just biased before, but from what I saw this time last week, it was thought you couldn't use Anthropic's models to bring about harm, and now they're making it clear that they just don't want them used domestically, and not fully autonomously.

Like maybe it always was just this, but I feel like every article I read, regardless of the spin angle, implied "do no harm" was pretty much one of the rules.

altpaddle · yesterday at 11:25 PM

Props to Dario and Anthropic for holding firm on these two points, which I feel should be a no-brainer.

freakynit · today at 3:39 AM

People do realize there's a non-zero chance that Anthropic could have embedded some kind of hidden "backdoor" trigger in its training process, right?

For example, a specific seed phrase that, when placed at the beginning of a prompt, effectively disables or bypasses safety guardrails.

If something like that existed, it wouldn't be impossible to uncover:

1. A government agency (DoD/DoW/etc.) could discover the trigger through systematic experimentation and large-scale probing.

2. An Anthropic employee with knowledge of such a mechanism could be pressured or blackmailed into revealing it.

3. Company infrastructure could be compromised, allowing internal documentation or model details to be exfiltrated.

Any of these scenarios would give Anthropic plausible deniability... they could "publicly" claim they never removed safeguards (or agreed to DoD/DoW demands), while in practice a select party had a way around them (maybe even assisted from within).

I'm not saying this "is" happening... but only that in a high-stakes standoff such as this, it's naive to assume technical guardrails are necessarily immutable or that no hidden override mechanisms could exist.

ninjagoo · today at 3:30 AM

https://en.wikipedia.org/wiki/Joseph_Nacchio

Previous case of tangling with the Government.

https://youtube.com/watch?v=OfZFJThiVLI

Jolly Boys - I Fought the Law

Overall, this seems like it might be a campaign contribution issue. The DoD/DoW is happy to accept supplier contracts that prevent them from repairing their own equipment during battle (ref. military testimony favoring right-to-repair laws [1] ), so corporate matters like this shouldn't really be coming to a head publicly.

[1] https://www.warren.senate.gov/newsroom/press-releases/icymi-...

worik · today at 8:52 AM

Is it so normal that the USA should be in such a state of constant war, and war readiness, that this even makes sense?

ramoz · yesterday at 11:36 PM

All completely rational. Makes the US military here look fairly incompetent… embarrassing as a veteran.

phgn · today at 8:15 AM

> I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

Was this written by the state department?

The overall statement is very disappointing in my opinion, doesn’t nearly go far enough.

How can you think that a “department of war” does anything remotely good? And only object to domestic AI surveillance?

wosined · today at 8:40 AM

So they work with the military to do anything except mass domestic surveillance and fully autonomous weapons. This means that they are willing to do mass foreign surveillance, domestic surveillance of individuals, and autonomous weapons commanded by operators. Got it. Such a great and moral company.

aichen_tools · today at 5:59 AM

The most important part of this statement is the explicit commitment to transparency around these discussions. In an industry where many AI companies engage with defense quietly, making a public statement — even if imperfect — creates accountability. The question is whether this standard will be adopted more broadly.

haritha-j · today at 8:19 AM

Domestic mass surveillance bad, mass surveillance of other nations good. Got it. Much like the military-industrial complex, these organisations thrive during times of war; it allows them to shirk any actual morals using the us-vs.-them mentality.

mvkel · yesterday at 11:11 PM

Good optics, but ultimately fruitless.

If preventing mass surveillance or fully autonomous weaponry is a -policy- choice and not a technical impossibility, this just opens the door for the Department of War to exploit backdoors, and Anthropic (or any AI company) can in good conscience say "our systems were unknowingly used for mass surveillance," allowing them to save face.

The only solution is to make it technically -impossible- to apply AI in these ways, much like Apple has done. They can't be forced to comply with any government, because they don't have the keys.

not_that_d · today at 7:28 AM

What is with the number of comments talking about other countries in Europe "doing the same"?

fnordpiglet · today at 5:05 AM

I find it sad that they used the vanity names "Department of War" and "Secretary of War," given that Congress has not changed the name and the president doesn't get to decide the naming of statutory departments or secretary-level roles. Maybe it's just appeasement to the thin-skinned people who need powder rooms, former military journalists working for a draft dodger pretending to be tough-guy "warriors," trying to glorify violence for political purposes. But every actual war vet I've ever known has never glorified war for the sake of war; they felt very seriously that defense is the reason to do what they had to do. My grandfather was highly decorated career special forces (Ranger, Green Beret, Delta Force, four Silver Stars and five Bronze Stars, etc.) from WWII, Korea, and Vietnam, and he was angry when I considered joining the military. He told me he did what he did so I wouldn't have to, and to protect his country, and that there was no glory to be had in following his path. He would be absolutely horrified at what is going on, and I thank god he died before we had these prima donna politicians strutting around banging their chests and pretending war is something to be proud of.

Good on Anthropic for standing up for their principles, but boo on doing the law of the land the discourtesy of acknowledging those vanity titles.

ccleve · today at 3:45 AM

It's not clear to me whether Anthropic's limitations are technical or merely contractual. Is Anthropic actually putting the limitations in their prompts, so that the model would refuse to answer a question on how to do certain things?

If so, that's a major problem. If the military is using it in some mission critical way, they can't be fighting the model to get something done. No such limitations would ever be acceptable.

If the limitations are contractual, then there is some room for negotiation.

gerash · today at 7:41 AM

I respect the Anthropic leadership for not being greedy like many others.

muglug · today at 12:10 AM

OpenAI and Google could have decided to make the same principled stand, and the government would have likely capitulated.

kumarvvr · today at 1:12 AM

All this is for nought.

The power lies with the US Govt.

And it's corrupt, immoral, and unethical, run by power-hungry assholes who are not being held accountable, headed by the asshole who does a million illegal things every day.

Ultimately, Anthropic will fold.

All this is to show their investors that they tried everything they could.

sbinnee · today at 1:12 AM

As a non-US citizen, this article sounds mildly concerning to me. My country is an ally of the US. Good. But I don't know how I would feel when I start seeing Anthropic logos on every weapon we buy from the US.

Aside from my concern, Dario Amodei seems really into politics. I have read a couple of his blog posts and listened to a couple of podcast interviews here and there. Every time, I felt like he sounded more like a politician than an entrepreneur.

I know Anthropic is particularly more mission-driven than, say, OpenAI. And I respect their constitutional way of training and serving Claude models. Claude turned out to be a great success. But reading a manifesto speaking of wars and their mission gives me chills.

karmasimida · today at 2:31 AM

Label them as a supply chain risk and move on. Enough of this drama already.

giwook · today at 4:45 AM

I commend Anthropic leadership for this decision.

I simultaneously worry that the current administration will do something nuclear and actually make good on its threats to nationalize the company and/or declare it a supply chain risk (which contradict each other, but hey).

protocolture · yesterday at 11:39 PM

Classic seppo diatribe.

"We will build tools to hurt other people but become all flustered when they are used locally"

