Around 10 years ago, in a college calculus class, I had a very ambitious classmate who wanted to go to DARPA and work on robotics. I asked if he was thinking it through solely from a technical perspective or considering the ethics side as well. He clearly didn't understand the question, so I asked directly: what if the code you write, or the autonomous machine you contribute to, is used for killing? His response: that's not my problem.
After spending a couple of years studying in the US, I came to the conclusion that executives and board members in industry don't care about society or humans, that even universities don't push students toward critical thinking and ethics, and that it has all turned into vocational training, turning humans into crafting tools.
Around the same time, at Harvard, I attended a VR innovation week. The last panel discussion of the day was on ethics and law, led by a law professor, a journalist, and a moderator, and attended by a handful of people. I asked why no founders, CEOs, or developers were part of the discussion or in attendance. The moderator responded that they couldn't find any who were qualified enough to take part. The discussion was basically: how do the products companies build affect society? Laws aren't a founder's problem, that's what lawyers are for, and ethics? Who cares, right?
This frenzy, this rat race toward the next billion-dollar company at any cost, has torn the fabric of society down to the level of individual thinking; or more like not thinking, just wanting and needing.
To those of you who would equivocate and dither about lending your skills to a morally and ethically compromised war machine in exchange for a fat paycheck, I would say the same thing that I teach my children:
"Everything, and I mean everything, can be taken from you except your integrity; only you can give that up."
> Our most important priority right now is making sure that our warfighters and national security experts are not deprived of important tools in the middle of major combat operations.
> we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible.
Why are people leaving OpenAI when this is Anthropic's stance? Are their two narrow exceptions enough to draw the ethical boundary people are comfortable with?
Raised an eyebrow a little at this sentence: "Anthropic has much more in common with the Department of War than we have differences."
Nothing brings home the Orwellian nature of USA 2026 more for me than the word "warfighter".
To state the obvious, I think when corruption and power in government go unchecked, companies eventually end up facing situations like this. It’s almost like making a deal with the devil.
At the beginning, they’re usually doing it for the money — and maybe some level of patriotism. Eventually they find themselves involved in things so ugly that they can’t really stomach it anymore. At the same time, they can’t easily back out either.
Then a new CEO comes in and thinks the previous guy was too soft, "He couldn’t handle it, but I can."
And the cycle continues.
Everyone knows that these companies have to comply, so a company trying to convince the public that it can choose not to comply is just telling a lie. I don't understand why Anthropic attempts damage control here. Why not just admit that all the data given to them is also used for war purposes? We are currently seeing the build-up of a much larger war. These things are interconnected, even more so when some of it is done for politics (e.g. re-election or simple election "boosters"; reminds me of the old documentary Manufacturing Consent, or its spiritual sibling, Wag the Dog).
It'll be very interesting to see how this case gets resolved - in court and in the court of public opinion. I believe it's incredibly important and I hope they prevail.
At this moment, I think we should have politics, left, right, and center, in our workplaces and in our life discussions everywhere. If you are not explicit about your stance, then you are going to be dragged along without a choice.
BTW, this deal went south when Anthropic argued that AI systems should never make kill decisions without meaningful human oversight.
Source: https://www.theguardian.com/us-news/2026/feb/26/anthropic-pe...
Not sure why Dario apologized for the internal memo leak. Seems like an odd thing to backtrack on.
Messages about the Project Maven, Palantir, and Anthropic integration are being flagged by certain interest groups:
"Palantir's Maven uses Anthropic's Claude code, sources say."
https://www.reuters.com/technology/palantir-faces-challenge-...
It is always astonishing that the reviled mainstream press is more critical than hackers these days.
Under Secretary of War Emil Michael posted that there is no active negotiation with Anthropic:
It has become a moral imperative to not work on this technology that is meant to replace us and the one thing that has separated us from machine and beast.
Slow it down as much as possible to give us more time.
> I apologize for the tone of the post
What a world we live in now where private companies are apologising for the "tone" of their speech while official representatives of the government daily express blatant lies and misrepresentations without the slightest fear of consequence.
It really is incredibly sad that what was one of the most respected countries in the world has descended to this - an utter mockery of a functioning democracy.
Why can't companies/governments make weapons that capture autonomously instead of killing in the same fashion?
I don’t feel that old, but I guess being 45 is ancient in tech.
The Silicon Valley tech jobs we have now have a history rooted in World War 2 and its funding by the US government.
https://youtu.be/ZTC_RxWN_xo?si=gGza5eIv485xEKLS
I'm not saying war is good or anything, but also don't ride a high horse, because none of it would be here without WW2.
Could they please start using the correct name? Department of Defense?
The Anthropic CEO/team should have learned to just say nothing.
Or more importantly - say something that says nothing.
When you say nothing to politicians like this then eventually the story moves elsewhere.
But these guys had to put a stake in the ground and yell it out loud.
In politics you must know when to speak and what to speak and how to speak without speaking.
A long time ago I worked for a company that, I learned, was selling its software to help target people during the Iraq war. I quit because I could not support building software that kills people.
This is a message to the people working in that line of business at Anthropic. You don't have to do it; you can quit. If you are helping this insane administration conduct war on Iran, quit. You don't need to have that kind of blood on your hands.
I saw someone's hypothesis that a generative model was used to help classify buildings to decide what to bomb, and that the girls' school was misclassified. If this was an Anthropic model, I can't imagine what it feels like being a worker there in that line of business.
The DoD still has not meaningfully moved to the DoW moniker. To me it represents the most fascist tendency: making announcements and presuming that's enough to change the truth on the ground. The legal entity one contracts with is the DoD. Going along with "DoW" is a signal to me that a party has capitulated to the most absurd form of governance.
This is turning into just another reality show. There are no adults anymore.
The OpenAI astroturfers jumped on this one. Their only interest is in trying to spin Anthropic as not meaningfully better to dissuade people from switching, not to get people to drop both companies altogether.
I built a website that shows a timeline of recent events involving Anthropic, OpenAI, and the U.S. government.
Posted here: https://news.ycombinator.com/item?id=47195085
What do we think are the chances that the government is attempting to destroy Anthropic’s value so they can buy it for pennies on the dollar?
A lot of people downvoted me for saying the messaging of the internal post was bad. Good to see Dario is smart enough to see that it was a bad look.
I don't think we'd fail to get AGI if Anthropic were to implode, and frankly, right now, I'd rather have someone say clearly: "They cannot stomach the existence of someone telling them 'No' or adhering to moral principles. Like spoiled children, they can't hear the former and are terrified by the latter, because it might expose them to the condemnation they deserve."
This is a reflection of the corruption in the system that you cannot escape. No one is calling out Trump on his corruption, illegal use of powers, and pathetic behavior, on the killing of people and the setting up of World War 3. Yet we call out others. We need to stay strong. If it comes to World War 3, we all lose.
What's next, bribing Trump with gold bars and donations to "charity"?
always funny af to see all the ugly loser (just because you went to MIT and raised XXX billion doesn't mean you aren't a loser who doesn't care for anything but YOURSELF) dorks and nerds who thought "the Empire were the good guys" finally get placed in the action seat as they help build the Death Star
thankfully, the giga Chads always win against the incel dorks and nerds in the end
So is this a backtrack or clarification on their original stance? Do I need to be worried about skynet killing grandma?
Cringing every time I see the word "warfighter", and disappointed that they're still pushing to keep that contract.
- Companies need to please Trump to exist
- CEOs can no longer speak on issues which might hurt the ego of the president
- Freedom of expression is limited to freedom to support Trump
Trump is the communist nobody warned you about :-D
"As we wrote on Thursday, we are very proud of the work we have done together with the Department, supporting frontline warfighters with applications such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more."
The Trump admin's censorship is just as bad as Biden's. We need an administration that doesn't abuse the power of the government in the free market.
The rich people are fighting with each other again.
The internal memo did read as fairly unhinged and political, which is not the message Dario likes to present. I'm glad he addressed this. It was unprofessional and unhelpful - even if Sam Altman is, in fact, a disgusting lunatic.
It looks like AI safety swings both ways - the government has deemed Anthropic unsafe for them. They asked for regulation and got it served to them.
It is incredible how far the Overton window has moved on this issue.
When I graduated in 2007, it was common for tech companies to refuse to let their systems be used for war, and it was an ordinary thing when some of my graduating classmates refused to work at companies that did let their systems be used for war. Those refusals were on moral grounds.
Now Anthropic wants to have two narrow exceptions, on pragmatic and not moral grounds. To do so, they have to couch it in language clarifying that they would love to support war, actually, except for these two narrow exceptions. And their careful word choice suggests that they are either navigating or expect to navigate significant blowback for asking for two narrow exceptions.
My, the world has changed.