More government intervention in private enterprise? This pattern seems to be gathering steam. Does that mean they're now subscribing to this model?
Or is this just par for the course and has always been going on, and only the reporting is different? Or does the current context make it a more sensitive topic?
If only a time-traveling robot and his human companions were to pay a visit to the decision makers at Claude (aka Cyberdyne? :) ).
What are they using it for, though? Target selection for precision strikes? I'm guessing their argument will be that fewer lives are lost if Claude helps make sure the attacks are surgically precise.
Person of Interest... who is gonna build "the Machine"?
I love watching the plot lines of The Terminator play out in real life.
It's been all of three days since Claude decided to delete a large chunk of my codebase as part of implementing a feature (it couldn't get it to work, so it deleted everything that triggered errors). I think Anthropic is right to hold the line on not letting the current generation delete people.
Read: the USA, as usual, doesn't like it when a company won't give them what they want.
Awwwnnnn poor thing :)
It's like US big tech is mad because the Chinese AI companies are stealing their data just like, wait for it, US big tech stole data from artists worldwide to train their models.
Sweet payback in the name of every single artist and company that has been affected by US greed.
Karma is a btch!
Feels like they'll use it for purposes Anthropic didn't approve of, and then turn around and blame Anthropic when it turns out asking ChatGPT to determine which ships are hostile was a bad idea.
"Until this week, however, Anthropic’s Claude product was the only model permitted for use in the military’s classified systems."
I hadn't realized. This does make me consider using alternatives more.
Kind of wild given the outcome appears to be https://time.com/7380854/exclusive-anthropic-drops-flagship-...
All of this is kind of weird.
https://www.bbc.com/news/articles/cjrq1vwe73po
> the Pentagon official told the BBC the current conflict between the agency and Anthropic is unrelated to the use of autonomous weapons or mass surveillance.
> The official added that the Pentagon would simultaneously label Anthropic as a supply chain risk.
*Supply chain risk*?
The BBC article seems to imply that the government wants to audit Anthropic.
This, coming at the same time those "distillation" claims were published, is all incredibly suspicious.
Claude is now the official LLM for Sauron and his killers.
As long as The Boring Company can drill a private Cheyenne Mountain-style bunker into some granite peak for the billionaires, and a new bunker is constructed under the Silicon Valley-financed White House ballroom for the politicians, everything is just fine.
Hegseth and Rubio already live on a military base because they are afraid.
It's inexcusable that the AI companies have not formed a united front against this. I've been skeptical of the idea that OpenAI leadership is outright MAGA, but even pure self-interest doesn't explain staying silent while the Pentagon demands autonomous killbots.
Something is deeply troubling when a company proclaims "we want to protect people" and the government's response is "we can't work with you."
It's baffling that they would sacrifice countless use cases for real government efficiency, ones that actually help people, just because Anthropic refused to build killer robots.