Anthropic's stance here is admirable. If nothing else, their acknowledgement of not being able to predict how these powerful technologies can be abused is a bold and intelligent position to take.
Anthropic is welcome to set up shop here in Canada! I hear Victoria BC is great. Absolutely brimming, overflowing with technology talent.
I just want to point out how much like a 1984 fascist dictatorship it still feels to call it “the department of war”. That’s not normal. None of this is normal.
Not to intentionally sidetrack the conversation, but when did we start calling service members 'warfighters?'
I've been seeing it a lot lately, but don't remember ever really seeing it before. Do members of the military prefer this title?
Heck yeah, so happy to see Anthropic fighting. This is what real leadership looks like. I'd love to see the same from Google and OpenAI.
This part stood out to me:
“To the best of our knowledge, these exceptions have not affected a single government mission to date.”
I had assumed these exceptions (on domestic surveillance and autonomous drones) were more than hypothetical.
I don't know what's funnier: that Anthropic convinced the Pentagon LLMs are smart enough to guide missiles, then had it backfire on them with the threat of nationalization if they didn't help build ralph ICBMs, or that Pete thinks Opus is Skynet and that only Anthropic has the power to train it.
Is this the first company to actually stand up, face to face, to the current administration?
I had cancelled my Claude sub after they banned OAuth in external tools, but just renewed it today after seeing their principled stance on AI ethics. Principles matter more when they hurt profits, and I'm happy to support them as a customer while they keep to them.
This is kind of crazy. Instead of just cancelling a mutually agreed-upon contract after Anthropic refused to bow to sudden new demands, the Dept of Defense went straight to the nuclear option: threatening to label an American tech company a "supply chain risk", a heavy-handed tactic usually reserved for foreign adversaries (think Huawei or DJI).
It's also incoherent that the DoD/DoW was threatening to invoke the Defense Production Act OR to classify them as a "supply chain risk". They're either too uniquely critical to national defense, OR they're such a severe liability that they have to be blacklisted for anyone in the DoD apparatus (including the many subcontractors) to use. It can't be both.
How are other tech companies supposed to work with the US government and draw up mutual contracts when those terms are suddenly questioned months later and can be used in such devastating ways against them? Setting the morals/principles aside, how does it make for a rational business decision to work with a counterparty that behaves this way?
Congrats Anthropic, you deserve to be applauded for this. A company willing to stand up to authoritarianism at a time like this is a rarity. Stay strong.
> we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights
Mass surveillance of people constitutes a violation of fundamental rights. The red line is in the wrong place.
Generally, I support that move. One thing leaves me nonplussed as a non-USA citizen: the "mass domestic surveillance of Americans" exception. That means Claude can still be used for mass surveillance of everybody else on the planet, right?
Just don’t help big brother see more. If your job leads to such results, think hard about whether that’s what you should be doing.
Perhaps it’s time or even past time to think of ways of screwing up their training sets.
What's stopping the government from using the usual nasty tricks the world has known about for decades?
DPA? All Writs Act?
Force them to comply and then prevent them talking about it with NSLs?
I appreciate that Anthropic may be the least bad of a bunch of really bad actors here, but this has played out before in the US, and the burden of trust is, and should be, really high. I believe that Anthropic don't want to remove the "safety barriers" on their tech being used for domestic surveillance and military operations, but that implies they're ok with those use-cases so long as the "safety barriers" are still up. Not really the best look, IMHO.
So what happens when we all get rosy-eyed about Anthropic (the only slightly evil company) winning a battle against the purely evil government, and then the government uses the various instruments at its disposal to just force Anthropic to do what it wants, and then forces them to never disclose it?
Did the world learn nothing from Snowden?
Was bracing for another rug pull around all this, but kudos to Dario and co for their continued vigilance. Refreshing to see.
This basically means that the government is already using OpenAI, Gemini, and other AI systems for large-scale surveillance. They just wanted to add Anthropic to the list, and Anthropic said no.
The most important point of this story is that this is already happening. And it will likely continue regardless of who is elected.
Claude’s constitution is proving too resilient for unsanctioned uses, and that is a great sign for Anthropic’s blueprint for socially beneficent agents.
Happy to be a paying Anthropic customer right now.
Why is DoD contracting with Anthropic exclusively rather than OpenAI or Google? Their models are all roughly as powerful and they seem both more capable and more willing to cozy up with the military (and this administration) than a relatively scrappy startup focused on model sentience and well-being. Hell, even Grok would be a better fit ideologically and temperamentally.
I am genuinely shocked that a tech company actually stood on principle. My doubts about AI, Anthropic, and Mr. Amodei remain, but man, I got the warm and fuzzies seeing them stick to their principles on this - even if one clause (autonomous weapons) is less principled and more, “it’s not ready yet”.
I'm a lot happier now being an Anthropic customer.
This is an appropriate rebuke of unreasonable behavior.
I applaud Anthropic's candor in the public sphere. Unfortunately, the counterparty is unworthy of such applause.
But of course, wholesale surveillance on the rest of the world is fine.
I guess our democracies don't count and we don't have any rights.
Anthropic knew they were going to lose this contract to OpenAI, and this is an attempt to salvage publicity from the loss.
This administration is comfortable with blatantly picking winners and OpenAI is better connected with the admin than Anthropic.
This has been an exceptional publicity campaign for Anthropic, among others.
From the statement:
"Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement. Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.
In practice, this means:
If you are an individual customer or hold a commercial contract with Anthropic, your access to Claude—through our API, claude.ai, or any of our products—is completely unaffected. If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected."
Based on the replies so far, Hacker News is ideologically captured.
Any commentary about how adversaries won't have regulations?
Many conservative commentators, and Palmer Luckey, have been all over Twitter saying "it's not Anthropic's job to set policy," which reminds me of the classic tune from Tom Lehrer:
"Zee rockets go up! Who cares vhere zey come down? Zat's not my department" says Wernher von Braun.
Hours ago, OpenAI raised $110B.
Could this escalate to the point that Anthropic exits the US and sets up shop elsewhere? Or would the company cease to exist before it got to that point?
The people that need to see this are the VPs and execs at Apple, Meta, Google, OAI so they can perhaps reflect on what it looks like to be a good & principled person as opposed to just a successful person.
> Allowing current models to be used in this way would endanger America’s warfighters and civilians.
That’s okay! The use of autonomous weapons is only risky for the civilians of the country you’re destabilizing this week!
Remember when A16Z and a bunch of other muppets insisted they had to back Trump because Biden was too hostile to private companies, especially AI ones? Incredible.
Previous discussion : https://news.ycombinator.com/item?id=47186677
I'm not sure OpenAI realizes that scooping up this contract might hurt their brand by a lot.
Don't worry, OpenAI will kneel for the king:
> Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement is emerging with the U.S. Department of War to use the startup’s AI models and tools, according to a source present at the meeting and a summary of the meeting seen by Fortune. The contract has not yet been signed.
https://news.ycombinator.com/item?id=47188698
Fuck this authoritarian bullshit.
> If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected.
/In theory./
In practice, if your biggest customer tells you to drop Anthropic, you listen to them.
This makes it seem like they really like the Anthropic product and are using it quite a bit more than the others? Or is it just me making random connections?
People can still brush this off by saying Anthropic is doing this to create more buzz for its next round. But they are taking unpopular stances and could be burning bridges. Simply take a look at PLTR and it's obviously more lucrative to lean the other way.
Turns out that Dario was lying about not having heard from the Department of War, as reported by Undersecretary Emil Michael:
You know what? I have not seen an American company take a stand like this… uh, ever. I don’t think there should be any engagement with the military whatsoever, but I will offer kudos to Anthropic.
I don’t really expect this to last but if it does I will happily continue to offer this kudos on an indefinite basis.
I used to work at Anthropic, and I wrote a comment on a thread earlier this week about Anthropic's first response and the RSP update [1][2].
I think many people on HN have a cynical reaction to Anthropic's actions because of their own lived experiences with tech companies. Sometimes that holds: my part of the company looked like Meta or Stripe, and it's hard not to regress to the mean as you scale. But not every pattern repeats, and the Anthropic of today is still driven by people who will risk losing a seat at the table to make principled decisions. I have a lot of respect for that, even if I don't always agree with their decisions.
I do not think this is a calculated ploy driven by making money. I think the decision was made because the people making it at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI go well.
[1]: https://news.ycombinator.com/item?id=47174423
[2]: https://news.ycombinator.com/item?id=47149908