This is gonna be game-changing for the next 2-4 weeks before they nerf the model.
Then for the next 2-3 months people complaining about the degradation will be labeled “skill issue”.
Then a sacrificial Anthropic engineer will “discover” a couple of obscure bugs that “in some cases” might have led to less-than-optimal performance. Still largely a user skill issue though.
Then a couple months later they’ll release Opus 4.7 and go through the cycle again.
My allegiance to these companies is now measured in nerf cycles.
I’m a nerf cycle customer.
Interestingly, I canceled my Claude subscription. I've paid through the first week of December, so it dries up on the 7th of December. As soon as I had canceled, Claude Code started performing substantially better. I gave it a design spec (a very loose design spec) and it one-shotted it. I'll grant that it was a collection of docker containers and a web API, but still. I've not seen that level of performance from Claude before, and I'm thinking I'll have to move to 'pay as you go' (pay --> cancel immediately) just to take advantage of this increased performance.
With Claude specifically, I've grown confident they have been sneakily experimenting with context compression to save money and doing a very bad job of it. However, for this same reason, one-shot batch usage or one-off questions & answers that don't depend on larger context windows don't seem to see this degradation.
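To make that suspicion concrete: if a provider were quietly trimming the middle of long conversations to cut token costs, short one-shot requests would pass through untouched while long agentic sessions would silently lose details established earlier. Here's a minimal sketch of what naive server-side trimming could look like; this is pure speculation about a generic cost-saving technique, not a description of Anthropic's actual pipeline, and every name in it is made up:

```python
# Hypothetical sketch of naive server-side context trimming.
# Speculative illustration only, not what Anthropic actually does.

MAX_TOKENS = 50_000  # imaginary per-request budget

def count_tokens(msg: dict) -> int:
    # crude stand-in for a real tokenizer
    return len(msg["content"]) // 4

def trim_context(messages: list[dict]) -> list[dict]:
    """Keep the system prompt and the most recent turns; drop the middle.

    Short one-shot prompts fit under the budget and are returned unchanged,
    which is why quick Q&A would show no degradation, while a long coding
    session quietly loses the context it built up earlier.
    """
    total = sum(count_tokens(m) for m in messages)
    if total <= MAX_TOKENS:
        return messages  # short requests are untouched

    head = messages[:1]          # system prompt
    tail = messages[-1:]         # always keep the latest user turn
    budget = MAX_TOKENS - count_tokens(head[0]) - count_tokens(tail[0])

    kept = []
    for msg in reversed(messages[1:-1]):  # walk back from the most recent
        cost = count_tokens(msg)
        if cost > budget:
            break                         # everything older gets dropped
        kept.append(msg)
        budget -= cost

    return head + list(reversed(kept)) + tail
```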
This is why I migrated my apps that need an LLM to Gemini. No model degradation so far all through the v2.5 model generation. What is Anthropic doing? Swapping for a quantized version of the model?
100%. They've been nerfing the model periodically since at least Sonnet 3.5, but this time it's so bad I ended up swapping out to GLM4.6 just to finish off a simple feature.
Hilarious sarcastic comment but actually true sentiment.
For all we know this is just the Opus 4.0 re-released
Thank god people are noticing this. I'm pretty sick of companies putting a higher number next to models and programmers taking that at face value.
This reminds me of audio production debates about niche hardware emulations, like which company emulated the 1176 compressor the best. The differences between them all are so minute and insignificant, eventually people just insist they can "feel" the difference. Basically, whoever is placeboing the hardest.
Such is the case with LLMs. A tool that is already hard to measure because it gives different output with the same repeated input, and now people try to do A/B tests with models that are basically the same. The field has definitely made strides in how small models can be, but I've noticed very little improvement since gpt-4.
I fully agree that this is what's happening. I'm quite convinced after about a year of using all these tools via the "pro" plans that all these companies are throttling their models in sophisticated ways that have a poorly understood but significant impact on quality and consistency.
Gpt-5.1-* are fully nerfed for me at the moment. Maybe they're giving others the real juice but they're not giving it to me. Gpt-5-* gave me quite good results 2 weeks ago; now I'm just getting incoherent crap at 20-minute intervals.
Maybe I should just start paying via tokens for a hopefully more consistent experience.
There are two possible explanations for this behavior: the model nerf is real, or there's a perceptual/psychological shift.
However, benchmarks exist. And I haven't seen any empirical evidence that a given model version's performance on benchmarks grows worse over time (in general).
Therefore, some combination of two things is true:
1. The nerf is psychological, not actual.
2. The nerf is real, but in a way that is perceptible to humans yet not to benchmarks.
#1 seems more plausible to me a priori, but if you aren't inclined to believe that, you should be positively intrigued by #2, since it points towards a powerful paradigm shift in how we think about the capabilities of LLMs in general... it would mean there is an "x-factor" that we're entirely unable to capture in any benchmark to date.
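For what it's worth, anyone who wanted to tease #1 apart from #2 could do the cheap version themselves: freeze a small prompt set, replay it against the same pinned model string on a schedule, and log pass rates. A rough sketch using the OpenAI Python client is below; the model name, prompts, and pass checks are placeholders, and temperature 0 only reduces, not eliminates, run-to-run variance:

```python
# Rough drift-detection sketch: replay a frozen prompt set against the
# same model string every day and log how many outputs still pass a
# simple check. Model name, prompts, and checks are placeholders.
import datetime
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o"   # pin whichever model string you suspect is drifting

# Frozen tasks: (prompt, substring the answer must contain to "pass").
TASKS = [
    ("What is 17 * 24? Answer with just the number.", "408"),
    ("Name the capital of Australia in one word.", "Canberra"),
]

def run_once() -> float:
    passed = 0
    for prompt, expected in TASKS:
        resp = client.chat.completions.create(
            model=MODEL,
            temperature=0,  # reduces, but does not eliminate, variance
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content or ""
        passed += expected.lower() in answer.lower()
    return passed / len(TASKS)

if __name__ == "__main__":
    record = {
        "date": datetime.date.today().isoformat(),
        "model": MODEL,
        "pass_rate": run_once(),
    }
    # Append to a local log; a downward trend over weeks would be the
    # kind of evidence the "real nerf" claim currently lacks.
    with open("drift_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    print(record)
```

A flat line over months wouldn't settle the debate, but it would at least move it from vibes to data.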