Good to see one AI company not selling out their values in exchange for military contracts. This shouldn't be rare, but it is. Good for them.
"Defense of democracy" is just another version of "think of the children".
Amodei’s use of “warfighters” (a Hegseth-era neologism for “soldiers”) is truly nauseating.
It's the Department of Defense, not the Department of War ... only Congress has the legal authority to change the name, and they haven't.
Keep in mind: the government is very invested logistically in Anthropic.
So no matter what xAI or OpenAI say - if and when they replace that spend - know that they are lying. They would have caved to the DoW’s demands for mass surveillance.
Because if there were some kind of concession, it would have been simplest just to work with Anthropic.
Delete ChatGPT and Grok.
> Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place.
It's absolutely disgusting that they would even consider working with the US government after the Gaza genocide started. These are modern-day holocaust tabulation machine companies, except this time they are selecting victims using a highly unpredictable black-box algorithm. The proper recourse here is to impeach the current administration, dissolve the companies that were complicit, and send their leadership to The Hague for war crimes trials.
Big respect
Total humiliation for Hegseth; I'm sure there will be a backlash.
Brigadier General S. L. A. Marshall’s 1947 book Men Against Fire: The Problem of Battle Command stated that only about 10-15% of men would actually take the opportunity to fire directly at exposed enemies. The rest would typically fire in the air to merely scare off the men on the opposing force.
I personally think this is one of the most positive of human traits: we’re almost pathologically unwilling to murder others even on a battlefield with our own lives at stake!
This compulsion to avoid killing others can be trivially trained out of any AI system to make sure that they take 100% of every potential shot, massacre all available targets, and generally act like Murderbots from some Black Mirror episode.
Anyone who participates in any such research is doing work that can only be categorised as the greatest possible evil, tantamount to purposefully designing a T800 Terminator after having watched the movies.
If anyone here on HN reading this happens to be working at one of the big AI shops and you’re even tangentially involved in any such military AI project — even just cabling the servers or whatever — I figuratively spit in your eye in disgust. You deserve far, far worse.
One piece of context that everyone should keep in mind with the recent Anthropic showdown - Anthropic is trying to land British [0], Indian [1], Japanese [2], and German [3] public sector contracts.
Working with the DoD/DoW on offensive use cases would put these contracts at risk. Anthropic most likely isn't training independent models on a nation-by-nation basis, so exporting the model for offensive use cases would be export controlled, and Anthropic would be shut out of public and even private procurement outside the US, because governments would demand parity of treatment or retaliate.
This is also why countries like China, Japan, France, UAE, KSA, India, etc. are training their own sovereign foundation models with government funding and backing, allowing them to use them on their terms, because it was their governments that built or funded them.
Imagine if the EU had demanded sovereign cloud access from AWS right at the beginning, in 2008-09. This is what most governments are now doing with foundation models, because most policymakers, along with a number of us in the private sector, are viewing foundation models through the same lens as hyperscalers.
Frankly, I don't see any offramp other than the DPA even just to make an example out of Anthropic for the rest of the industry.
[0] - https://www.anthropic.com/news/mou-uk-government
[1] - https://www.anthropic.com/news/bengaluru-office-partnerships...
[2] - https://www.anthropic.com/news/opening-our-tokyo-office
[3] - https://job-boards.greenhouse.io/anthropic/jobs/5115692008
TLDR: « depends on where you live »
There is no Department of War. This is the dumbest fucking timeline.
This is a PR play by Anthropic, likely in coordination with the administration. They don't care; they just need the public to view them as a victim here, and then it's business as usual.
I'd prefer they get shut down. LLMs are the worst thing to happen to society since the nuclear bomb's invention. People all around me are losing their ability to think, write, and plan at an extraordinary pace. Keep frying your brains with the most useless tool alive.
Remember, the person who showed their work on their math test in detail did 10x better than the guys who only knew how to use the calculator. Now imagine being the guy who thinks you don't need to know the math or how to use a calculator, lol.
The Pentagon should be using open models, not closed ones by OpenAI/Anthropic/xAI. The entire discussion of what Anthropic wants is therefore moot.
I have read the whole thing but I nonetheless want to focus on the second paragraph:
> Anthropic has therefore worked proactively to deploy our models to the Department of War
This should be a "have you noticed that the caps on our hats have skulls on them?" moment [1]. Even if one argues that the sentence should not be read literally (that is, that it's not literal war we're talking about), the only reason for calling it "Department of War" and "warfighters" instead of "Department of Defense" and "soldiers" is to gain Trump's favor, a man who dodged the draft, called soldiers "losers", and has been threatening to invade an ally for quite some time.
There is no such thing as a half-deal with the devil. If Anthropic wants to make money from AI misclassifying civilians as military targets (or, as has happened, from identifying which residential building should be collapsed on top of a single military target, civilians be damned), good for them, but to argue that this is only okay as long as said civilians are brown is not the moral stance they think it is.
Disclaimer: I'm not a US citizen.
Principles are the things you would never do for any amount of money. This might be the only principled tech company in the world.
Wow, I expected them to cave, and they didn't!
I'll be signing up for Claude again; Gemini has been getting kind of crap recently anyway.
This seems to be at least partially written by AI: There is no Department of War, it is called the Department of Defense.
They essentially said "we're not fans of mass surveillance of US citizens and we won't use CURRENT models to kill people autonomously", and people are saying they're taking a stand and doing the right thing? What???
I guess they're evil. Tragic.
Why I built this: I’ve always felt that GitHub stars alone don’t tell the full story of a project's impact. I wanted to see if I could quantify the effort and "financial worth" behind a repository, even if just as a fun estimate. It started as a way to check the value of my own side projects and grew from there.
How it works: The tool fetches real-time data from the GitHub API. The valuation algorithm takes into account several factors:
Total stars and forks (popularity).
Commit frequency and recent activity (maintenance level).
Number of contributors (community strength).
Repo age and issue activity.
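The factors above could be combined in many ways; here is a minimal sketch of one possible weighting scheme, purely as an illustration — the function name, the weights, and the field names are all my own assumptions, not the actual algorithm behind the tool:

```typescript
// Hypothetical repo "valuation" heuristic, sketched from the factors listed
// above. All weights are invented for illustration only.
interface RepoStats {
  stars: number;          // popularity
  forks: number;          // deeper engagement than a star
  commitsLastYear: number; // maintenance level
  contributors: number;   // community strength
  ageYears: number;       // repo age
}

function estimateValueUSD(r: RepoStats): number {
  const popularity = r.stars * 10 + r.forks * 25;
  // Cap commit count so bot-driven commit spam can't inflate the score.
  const maintenance = Math.min(r.commitsLastYear, 500) * 5;
  const community = r.contributors * 100;
  // Logarithmic term: age helps, but with diminishing returns.
  const maturity = Math.log1p(r.ageYears) * 1000;
  return Math.round(popularity + maintenance + community + maturity);
}

const demo = estimateValueUSD({
  stars: 1000, forks: 100, commitsLastYear: 200,
  contributors: 20, ageYears: 3,
});
console.log(demo); // 16886 with these made-up weights
```

A real version would also need to normalize across ecosystems (a 1k-star niche library can matter more than a 10k-star tutorial repo), which is exactly the kind of calibration question the feedback request below is about.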
The Tech Stack: The app is built with Next.js and Tailwind CSS, and it’s deployed on Vercel. I tried to keep it as lightweight and fast as possible.
I’d love your feedback: Is the valuation logic too optimistic or too conservative? What other metrics should I include to make the estimate more "realistic" for the open-source world?
My man