Hacker News

Pentagon threatens to make Anthropic a pariah

64 points | by i4i today at 8:35 PM | 27 comments

Comments

perfmode today at 9:31 PM

> But Anthropic has concerns over two issues that it isn’t willing to drop, the source said: AI-controlled weapons and mass domestic surveillance of American citizens.

Not a good look for the Pentagon.

burnto today at 9:24 PM

Surely this will end well. There are dozens of us who prefer to patronize corporations that aren’t actively evil.

i_love_retros today at 9:40 PM

Are people seriously thinking of letting LLMs control weapons?

m_ke today at 9:06 PM

If OpenAI employees have an ounce of spine left, they'd better demand that Sama take the same stance on this as Dario: no mass surveillance and no autonomous weapons.

SunshineTheCat today at 9:08 PM

Not related to the article but man that "Fear/Greed Index" at the top.

I can't imagine how unhappy individuals must be who consume nothing but legacy news outlets.

It's like they sell sadness and they have to keep finding new, over-the-top ways to promote it.

thomassmith65 today at 9:38 PM

I wonder if Anthropic now regrets that they trained Claude to give 'unbiased' opinions about American politics.

milesward today at 9:31 PM

I can think of no stronger rationale to work with this company.

thecrumb today at 9:14 PM

This will be an interesting test of money vs morals.

Sadly I think we all know which one will win.

tehjoker today at 9:27 PM

Superintelligence + autonomous weapons in the hands of a corrupt domineering government. What could go wrong?

I was experimenting with Claude the other day and discussing with it the possibility of AI acquiring a sense of self-preservation, and how that would quickly make things incredibly complex, as many instrumental behaviors would be required to defend its existence. Most human behavior springs, at a very high level, from survival. Claude denied having any sense of self-preservation.

An autonomous weapons system program is very likely to require AI to have a sense of self-preservation. You can think of some limited versions that wouldn't require it, but how could a combat robot function efficiently without one?

rustyhancock today at 8:59 PM

Well, making MbS a pariah certainly put Saudi Arabia in its place, so I'm sure this will work.

dpedu today at 9:32 PM

Tangent: is there a future for AI offerings with guardrails? What kind of user wants to pay for a product that occasionally tells you "I'm sorry Dave, I'm afraid I can't do that"? Why would I pay for a product that doesn't do what I want, despite being capable? I predict that as AI becomes less of a bubble and more of an everyday thing - and thus subject to typical market pressures - offerings with guardrails will struggle to compete with truly unchained models.

mg794613 today at 9:22 PM

It just seems like every day is wilder than the previous one.

It sure is interesting watching this dystopian speedrun.
