Yeah, Qwen3 Coder for Claude Code and 3.5 for OpenClaw have already replaced my full-stack use of Opus 4.6; it's fine for basic web apps, k8s/docker infra setup, optimizing AI models, etc., with only a slightly higher error rate than Opus. The upcoming 3.6 together with Gemma4 might make it even better (still to test). OpenAI's memory spot market play might have been directed at local inference as well.
Look for Deepseek 4 when it drops; I'm curious how good it will be.
The thing is, if you're using AI responsibly today, you're already breaking tasks down to such a granular level that you don't need the power of Opus. You can save that for deeper research tasks.