Hacker News

jmward01 · today at 2:10 AM

"Until this week, however, Anthropic’s Claude product was the only model permitted for use in the military’s classified systems."

I hadn't realized. This does make me consider using alternatives more.


Replies

thephyber · today at 3:12 AM

This is most likely because getting SaaS software to conform to federal regulations and to meet the security requirements of the US military is difficult and expensive. FedRAMP is onerous.

And LLM products are new-ish. It suggests that Anthropic made federal government contracts a priority while OpenAI, Alphabet, and AWS didn't.

skeptic_ai · today at 2:34 AM

They always focused on safety (their own safety). They only backed off from the US military once they got bad press. As usual, they are not an ethical company. I can't say that's unusual, since all corporations are the same. Just don't fall for the illusion they create.

If you look at my post history you can see I’m always calling them out about how sketchy they are.

LordDragonfang · today at 3:33 AM

It's a little weird, too, because Claude definitely isn't the only model approved for use on classified systems in general; both xAI and OpenAI have approved models, at the very least.

https://devblogs.microsoft.com/azuregov/azure-openai-authori...

https://x.ai/news/government