There are many claims here that Anthropic wants to enforce things with technology and OpenAI wants contract enforcement and that OpenAI's contract is weaker.
Can someone help me understand where this is coming from? Anthropic already had a contract that clearly didn't have such restrictions. Their models don't seem to be enforcing restrictions either, since they've apparently been used in ways Anthropic doesn't like. This is speculation on my part, but I imagine their model was used in the recent Mexico and Venezuela attacks, and that is what's triggering all the back and forth.
Also, Dario seems happy about autonomous weapons and has been working with the government to build them, so why is Anthropic considered the good side here?
This is incorrect: their existing contract had these red lines and more until this January 9th memo: https://media.defense.gov/2026/Jan/12/2003855671/-1/-1/0/ART... which led to DoW trying to renegotiate under the new standard of “any lawful use”. Anthropic never tried to tighten standards beyond what was in their original contract; it was DoW that tried to loosen them.