"You are what you won't do for money." is a quote that seems apt here. Anthropic might not be a perfect company (none are, really), but I respect the stance being taken here.
This is at best a superficial attempt to show that Anthropic objects to what is already in play.
Personally, I'd rather live in a country which didn't use AI to supplant either its intelligence or its war fighting apparatus, which is what is bound to happen once it's in the door. If enemies use AI for theirs, so much the better. Let them deal with the security holes it opens and the brain-drain it precipitates. I'm concerned about AI being abused for the two use cases he highlights, but I'm more concerned that the velocity at which it's being adopted to sift and collate classified information is way ahead of its ability to secure that information (forget about whether it makes good or bad decisions). It's almost inconceivable that the Pentagon would move so quickly to introduce a totally unknown entity with totally unknown security risks into the heart of our national security. That should be the case against rapid adoption made by any peddler of LLMs who claims to be honest, to thwart the idiots in the administration who think they want this technology they can't comprehend inside our most sensitive systems.
I commend Anthropic leadership for this decision.
I simultaneously worry that the current administration will do something nuclear and actually make good on their threat to nationalize the company and/or declare the company a supply chain risk (which contradict each other but hey).
This is why I like Dario as a CEO - he has a system of ethics that is not just about who writes the largest check.
You may not agree with it, but I appreciate that it exists.
What is with the number of comments talking about other countries in Europe "doing the same"?
>> We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party
You can’t choose to work with OFAC-designated entities: there are very serious criminal penalties. Therefore, this statement is somewhat misleading in my opinion.
So they will work with the military to do anything except mass domestic surveillance and fully autonomous weapons. This means they are willing to do mass foreign surveillance, domestic surveillance of individuals, and autonomous weapons commanded by operators. Got it. Such a great and moral company.
I was concerned originally when I heard that Anthropic, who often professed to be the "good guy" AI company that would always prioritize human welfare, opted to sell priority access to their models to the Pentagon in the first place.
The devil's advocate position in their favor I imagine would be that they believe some AI lab would inevitably be the one to serve the military industrial complex, and overall it's better that the one with the most inflexible moral code be the one to do it.
I respect the Anthropic leadership for not being greedy like many others.
I’m very happy that Anthropic chose not to cave in to the US Dept of War’s demands, but their statement has an ambiguity.
Does this mean they’d be ok to have their models be used for mass surveillance & autonomous weapons against OTHER countries?
A clarification would help.
Ukraine, Russia, and China actively develop AI systems that kill. A US-based company declining to develop such systems will not change the course of events.
> I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.
Was this written by the state department?
How can you think that a “department of war” does anything remotely good? And only object to domestic AI surveillance?
They made it easy to generate PowerPoint presentations; that is the real reason the DoW is using them.
This is a very chauvinistic approach... why couldn't another model replace Anthropic here? I suspect it's because government people like the Excel plugin and the font has a nice feel. A few more weeks of this and xAI will be the new government AI tool.
Domestic mass surveillance bad, mass surveillance of other nations good. Got it. Much like the military-industrial complex, these organisations thrive during times of war; it allows them to shirk any actual morals using the us-vs.-them mentality.
It may sound crazy, but they should just move the company to Europe or Canada, instead of putting up with this.
"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."
That opening line is one hell of a setup. The current administration is doing everything it can to become autocratic, thereby setting itself up to be adversarial to Anthropic, which is pretty much the point of the rest of the blog. I guess I'm just surprised to see such a succinct opening instead of just slop.
What is OpenAI's stance on these issues? Are they working with DOW currently?
These guys are selling snake oil to the government because they know they can get cash based on fear. The Chinese are releasing equivalent models for free or super cheap, and AI and energy costs keep going up for American AI companies while China benefits from lower costs. So yeah, you have to spread FUD to survive.
I think it’s a pretty strong statement. It is unfortunately weakened by going along with the “Department of War” propaganda. I believe that the name is “Department of Defense” until Congress says otherwise, no matter what the Felon in Chief says.
"These latter two threats are inherently contradictory"
After the standing up for democracy. This is my favorite part. "Your reasoning is deficient. Dismissed."
Principles are the things you would never do for any amount of money. This might be the only principled tech company in the world.
Probably not a good idea to let Claude vibe-select targets; it still sometimes hallucinates.
I can't help but highlight the problem that is created by the renaming of the Department of Defense to the Department of War:
> importance of using AI to defend the United States
> Anthropic has therefore worked proactively to deploy our models to the Department of War
So you believe in helping to defend the United States, but you gave the models to the Department of War - explicitly, a government arm whose name now encompasses purely offensive action with no defensive element.
You don't have to argue that you are not supporting the defense of the US by declining to engage with the Department of War. That should be the end of the discussion here.
They want to be nationalized, which is the most profitable exit they'll ever get.
If these values really meant anything, then Anthropic should stop working with Palantir entirely, given their work with ICE, domestic surveillance, and other objectionable activities.
The call is coming from inside the house
Why would the US security apparatus outsource the model to a private company? DARPA or whatever should be able to finance a frontier model and do whatever they want.
Powerful post - good on him for taking a stand, but questionable in light of their recent move away from safeguards for competitive reasons.
Party balloons along the southern border beware.
Is it so normal that the USA should be in such a state of constant war and war readiness that this even makes sense?
Bottom line up front: it's probably better to address the root cause of this situation with the general solution - making government drastically smaller and less pervasive in people's lives and businesses. I remember, not too long ago during the last administration, very heavy-handed, unforgivable, and traumatizing rhetoric and executive orders that intruded into the bodily autonomy of millions of Americans and threatened millions of Americans' jobs. This happened to me, and I personally received threats that my livelihood would be taken away, which were a direct result of the Executive branch. This isn't just a problem of Congress ceding powers to the Executive branch; it's a problem that so much power to legislate and tax is in the hands of the government at all! Every election cycle that results in a transfer of power to the other party inevitably results in handwringing and panic, but this wouldn't be the case if citizens voted their powers back and government weren't so consequential.
That is frikkin impressive. Well done sir.
I imagine they'll drop this bare-minimum commitment when it becomes financially expedient.
Didn't Dario Amodei ask for more government intervention regarding AI?
Impressive and heartening. Bravo.
I restored my Max sub. I wish they'd pushed back more, so I went with $100/month only.
Didn't Cheney's company have the option to bid on contracts, by comparison?
Congratulations, you just got a new $200 Claude Max plan customer.
this is... a nothing burger? they don't exclude working on autonomous weapons, nor do they exclude mass surveillance. so what gives?
Sounds like they're following the Google playbook: don't be evil, until the shareholders tell you to.
At this point, surveillance state is coming whether Dario does this or not. You can do all that with open source models. It’s sad that we don’t have the right people in charge in govt to address this alarming issue.
It's OK to mass-survey foreign entities. Got it.
torment nexus creators are shocked, appalled even, to discover that people desire to use it to torment others at nearby nexus
Oh dear, what a mess of a statement that is. He wants to use AI "to defeat our autocratic adversaries" - just who or what are they, exactly? Claude seems to think they are Russia, China, North Korea, and Iran. Is Claude really a tool to "defeat" these countries somehow? This statement also seems pretty messy: "Anthropic understands that the Department of War, not private companies, makes military decisions." Well then, just how do they think Claude is going to be used there, if not to make or help make military decisions?
The statement goes on about a "narrow set of cases" of potential harm to "democratic values", ...uh, hmm, isn't the potential harm from a government controlled by rapists (Hegseth) and felons using powerful AI against their perceived enemies actually pretty broad? I think I could come up with a few more problem areas than just the two that were listed there, like life, liberty, pursuit of happiness, etc.