I just cancelled my plan, but still have access to Pro and Code apparently until my cycle would have renewed. Hopefully they get a clear signal from this, especially if more of us cancel with the intention to sign back up should they reverse this decision.
“Let’s see how they react, and they will be ok and we will be rich.”
The enshittification intensifies.
They're trying to find every way to enshittify their partially unprofitable service. When they find a way that sticks, they'll go with it. This has become the preferred way of doing tech business in the US. Create a great thing, give it away for free, hook users in, then try to squeeze them. In theory competition should limit this kind of behaviour, but for some reason the big companies all wait on one another and start enshittifying in unison. How this is legal still puzzles me, but evidently that's how it goes.
Here's my hot take: Anthropic et al. are trying to make development a subscription-only job, and they've done it by illegally pirating pretty much the whole Internet. If they went out of business tomorrow and serving models became a commoditized service like storage, we'd all be better off. Sure, we'd have less research on frontier models, but we don't need AGI - we need good local models, RAM, and good open source / open weight AI tools.
I posted this question two weeks ago: "What is your plan when the AI you have implemented throughout your company changes the results you've come to trust?" (https://www.theregister.com/2026/04/06/anthropic_claude_code...).
Since then, I had to add:
"or won't let you log in?": https://github.com/anthropics/claude-code/issues/44257
"or makes stuff up?": https://dwyer.co.za/static/claude-mixes-up-who-said-what-and...
"or when it's down?": https://status.claude.com/incidents/6jd2m42f8mld
"or when you get banned?": https://bannedbyanthropic.com/
"or installs spyware?": https://www.thatprivacyguy.com/blog/anthropic-spyware/
And this is all just Anthropic. It's insane. With any other tech, there would be a consensus to wait until it's stable - but not AI. We go full throttle when it's AI.
Genuinely curious how people who have implemented this in serious companies are answering these questions, because my answer is to keep it the fuck out.
Saw this coming. $20/month for autonomous agents running 24/7 was clearly not sustainable at API pricing. The surprising part is that there's still no official announcement - just a quiet page edit.
Wonder where this leaves folks who paid the annual rate? Here’s what Claude said:
https://claude.ai/share/1a4293bd-b2d4-41b7-a887-eb42b3ae8b6e
“The standard answer here is no — Anthropic does not typically refund the unused portion of annual plans, and annual subscribers won’t see prorated refunds, retaining access for the full remaining period instead. That said, your situation is a bit different — you’re not just canceling, you’re canceling because a feature you paid for was removed. That’s worth contacting Anthropic support directly about. Their support team can check your refund eligibility, and this kind of material change to the plan is exactly the case where a support escalation could go differently than a standard cancellation. You can reach them through the in-app support messenger at support.claude.com or via the thumbs-down feedback button. I’d recommend explaining specifically that Claude Code was a factor in your annual plan purchase.”
Meh, $20 Codex is better at this moment anyway.