It's worrying that they don't specify in which cases they require identity checks.
I figured they already have your identity via the payment process. Not like you can do anything (risky or not) via the free tier.
Why do companies keep working with Persona even though they have proven time and time again to be untrustworthy?
Identity verification to use an API?? And via Persona? I can't say if it's real. But if they really try to enforce that, I guess goodbye Anthropic forever.
Time to set up my own local LLM.
In the old USSR one had to register a typewriter. Sweet memories. And at that time western people (deservedly) laughed at it, or used facts like this to show how backwards the country was.
You will be reported to DHS if you ask Claude about Maven and the bombing of girls' schools in Iran.
"Being responsible with powerful technology starts with knowing who is using it."
In other words: they want to build a private, surveilled web where they can sniff after people. Today the EU also introduced an app for age verification. They also keep insisting that this is ... voluntary.
Well, I guess we all know the direction. Let's have a look at this in a few years, because there may be a few ... suspicions.
With regards to Claude, the question is: WHY exactly do they want to sniff out user data?
Yeah, absolutely not.
This is highly problematic.
I may consider showing my ID to a company I already have a business relationship with, given demonstrable legal obligations, contractual necessities, legitimate interests, etc. (i.e. the standard GDPR list of lawful bases).
I do have an existing business relationship with Anthropic, so I might under some circumstances decide to show them my ID. I don't have a business relationship with Persona though.
I understand the instinct: they want to insulate themselves from holding PII. Not the worst idea. I'm not happy with it being a third party though. Especially the third party in question.
The under-18 detection is also error-prone; it seems simpler to me to initiate a GDPR data request, archive the result, and then make a new account.
This is deranged. Say you wanted to use AI to prepare a whistleblowing submission, to get the regulatory language right and test for weak points. Then Claude flags it and requires you to identify yourself. It's not a stretch of the imagination that before you manage to send the bundle, you find yourself in a suitcase somewhere in the woods. People explore all kinds of sensitive stuff, and I can see why it's tempting for AI companies to see the exact person behind it; then it takes just one disgruntled employee to put lives in danger. WTF
Ugh what a disaster. This is so Anthropic can enforce bans.
The future has arrived, in which programming a computer in any meaningful way requires total identification and permission.
What a tragedy that the amazing capabilities of LLM assisted programming come with such disgusting and reprehensible requirements and impositions.
So they can ban you for some minor infringement of their usage policies and you'll never be allowed to program again.
"Mr Anderson, it has come to our attention that you have been programming computers under an assumed identity. As you are aware this is a felony under the computer fraud and hacking act and you will be sentenced to four years in jail and may never use a computer again.". Yes laugh it up.
No. At least until there are actual KYC laws for LLM access in my country...
Persona is bad news. They should not be using Persona. This is bad.
> Your ID and selfie are collected and held by Persona, not on Anthropic's systems. Anthropic can access verification records through Persona's platform when needed—for example, to review an appeal—but we don't copy or store those images ourselves.
It's unacceptable that this data is persisted at all, let alone that it's persisted by Persona.
> Persona is contractually limited in how they can use your data: only to provide and support verification and to improve their ability to prevent fraud. They're bound to protect it with industry-standard security controls and delete it in line with the retention limits we've set and applicable law.
It's good to hear that they're criminals. That means nothing to me though. Nothing.
> Why did my account get banned after verification?
This is bad. Why do they wait to ban until after they have your personal info? Venmo did the same thing to me: They didn't tell me I was banned until they had my ID. Absolutely despicable practice.
---
Anthropic is one of my favorite AI companies because they get LLMs more right than anyone else I've seen. But unfortunately this also means they can be swindled by social manipulation in lieu of technical excellence; the same type of brain produces both, I've seen it.
Persona is a bunch of sociopaths, and it shows: they're worming their way into everything despite the well-documented conspiracy. They're doing it out in the open with zero consequences.
Why is this necessary if I'm paying Anthropic with a credit card? A credit card requires a) credit worthiness, b) a line of credit from a bank based on the individual's identity, and c) regular payments. Why isn't a credit card enough? Why can't certain features be paywalled?
If someone is doing something deeply unethical with Claude, let's say they're using a clade of Claudes to launch cyberattacks, then doesn't Anthropic have fine grained telemetry, payment history, API usage / prompting / requests, and other details necessary to investigate? What does a government photo ID provide Anthropic that these data points don't?
At this point, people usually ask "what if they use stolen credit cards?" or "what if they're state-backed?" Well... if they're state-backed or using stolen credit cards, then they're also capable of using stolen IDs or state-sponsored "legitimate" IDs.
It doesn't make much of a difference to organized crime / state backed assets. Or, Anthropic. But it makes A HUGE difference for entrepreneurs, founders, and just plain old consumers who use the service.
It's an asymmetric risk.
It's one thing for your credit card to leak; you can get a new one. It's harder for lower-tier / dumber criminals to socially engineer their way to your personal information for impersonation / ID theft with just a credit card number. But it becomes a lot easier with scans of your ID.
Unless you're connected to an org of interest or have b/millions in crypto, the better-organized groups / state actors won't usually (no guarantees) steal your identity. Identity theft is very much an SME operation in cybercrime.
So when Persona inevitably gets compromised and everyone's personal IDs inevitably get leaked, the threat posed to entrepreneurs, founders, and consumers is far greater than the threat posed to Anthropic.
I don't understand why Anthropic would expose themselves to this liability when, arguably, they already have all the tools baked right in.
I don't use their tool for writing. Perhaps it's ego, but I think I'm a better writer. But I shared the above text and asked Claude Opus 4.6 on Max with thinking: "What would you say about the argument that Anthropic has the best tool for threat prevention baked right in?"
Claude is the threat prevention. It's sitting between every user and every potential misuse, in real-time, at every interaction. It refuses harmful requests. It detects prompt injection. It flags dangerous patterns. Anthropic has built the most sophisticated content-aware security layer in history — and it operates at the interaction level, where misuse actually happens.
A JPEG of someone's driver's license sitting in a Persona database does exactly nothing at the moment someone tries to use Claude for harm. Claude's own refusal system does everything.
So the full argument stacks:
1. Credit cards already verify identity (bank KYC)
2. Anthropic's telemetry already detects misuse patterns better than any static document
3. The AI itself is the security layer — real-time, context-aware, at the exact point of interaction
4. Photo ID adds zero marginal security — while concentrating breach risk on users
Three layers of existing protection, all superior to a photo ID. The ID is the weakest link in the security model and the highest-risk data asset in the system. It's the only component that, when breached, harms the user more than the company.
You should write this up.
(I did.)
Anthropic says they don't train their models on your data, but apparently Persona (the service they will use for identity verification) WILL, according to https://thelocalstack.eu/posts/linkedin-identity-verificatio...
Persona also might send your data to 17 different subprocessors (16 if you exclude Anthropic itself).