Hacker News

6Az4Mj4D today at 12:08 AM (13 replies)

Leaving autonomous weapons aside, how does Anthropic justify signing up with surveillance company Palantir while now raising concerns about the same surveillance with the DoD?

It doesn't match.


Replies

pfisherman today at 1:22 AM

This is very easy to explain. Anthropic outlines some limitations in their terms of service. Palantir accepted those terms. The DoD did not.

OpenAI claims their terms of service for the DoD contain the same limitations as Anthropic's proposed service agreement. Anthropic claims that this is untrue.

Now given that (a) the DoD terminated their deal with Anthropic, (b) stated that they terminated because Anthropic refused to modify their terms of service, and (c) then signed a deal with OpenAI, I am inclined to believe that there is in fact a substantial difference between the terms of service offered by Anthropic and OpenAI.

dmix today at 12:46 AM

> signed up with surveillance company Palantir

Just to nitpick, Palantir isn't doing surveillance like Flock. They do data integration under contract for governments, the way IBM does. Some data pipelines include law enforcement surveillance data, which gets integrated with other software/databases to help police analyze it. There's no evidence they are collecting it themselves, despite recent headlines. It's a relatively minor but important distinction IMO.

https://www.wired.com/story/palantir-what-the-company-does/

ekjhgkejhgk today at 12:34 AM

It might match. The red line was domestic surveillance. You don't know what deal they had. Giving Anthropic the benefit of the doubt, perhaps Palantir said "Deal, we won't use your tool domestically".

tbrockman today at 12:42 AM

Whether or not you think it truly aligns with their stated values, in their partnership with Palantir (making Claude available within Palantir's AI platform) they requested consistent restrictions:

> “[We will] tailor use restrictions to the mission and legal authorities of a government entity” based on factors such as “the extent of the agency’s willingness to engage in ongoing dialogue,” Anthropic says in its terms. The terms, it notes, do not apply to AI systems it considers to “substantially increase the risk of catastrophic misuse,” show “low-level autonomous capabilities,” or that can be used for disinformation campaigns, the design or deployment of weapons, censorship, domestic surveillance, and malicious cyber operations.

Source: https://techcrunch.com/2024/11/07/anthropic-teams-up-with-pa...

sigmar today at 1:14 AM

Why do you assume the contract with Palantir doesn't have similar terms? Weird assumption.

elevation today at 1:12 AM

The moral disposition of Anthropic's leaders doesn't matter, because they don't own the company. Investors won't idly watch them decimate billions in ROI by alienating the largest institutional customers on the planet.

trinsic2 today at 1:05 AM

This exchange between Anthropic and OpenAI feels a lot like theater. If I were really trying to stop abuses, I wouldn't go out of my way to talk about it. The "public sees us as the heroes" bullshit feels like a smoke screen. I'd make one statement, keep silent, let the public do the math, and not get involved.

Madmallard today at 1:13 AM

They are all guilty.

freejazz today at 12:43 AM

It's just marketing.



spaghetdefects today at 1:04 AM

Thank you. Anthropic is also culpable in the illegal war against Iran that started with the bombing and murder of an entire girls' school.

https://www.cbsnews.com/news/anthropic-claude-ai-iran-war-u-...
