They don't need to catch up. They just need to be good enough and fast as fuck. The vast majority of useful LLM tasks have nothing to do with how smart the model is.
GPT-5 models have been the most useless models released this year despite being SOTA, and it's because they're slow as fuck.
> just need to be good enough and fast as fuck
Hard disagree. There are very few scenarios where I'd pick speed (quantity) over intelligence (quality) for anything remotely to do with building systems.
> They don't need to catch up. They just need to be good enough
The current SOTA models are impressive but still far from what I’d consider good enough to not be a constant exercise in frustration. And if the SOTA models still have a long way to go, the open-weights models have an even larger gap to close.
GPT 5 Codex is great - the best coding model around except maybe for Opus.
I'd like more speed, but given the choice I'd take more quality over more speed.
I get GPT 5.2 responses on Copilot faster than for any other model, almost instantly. Are you sure they’re slow as fuck?
Confused. Is ‘fuck’ fast or slow? Or both at the same time? Is there a sort of quantum superposition of fuck?
This. You can distill a foundation model into an open-weights one. The Chinese labs will be doing this for us for a long time.
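For the curious, here's what that looks like mechanically: a minimal sketch of classic logit-level distillation, assuming PyTorch and HuggingFace-style causal LMs (forward pass returns .logits, shared vocabulary); all names here (teacher, student, distill_step) are hypothetical. With a closed API you'd instead fine-tune the student on teacher-generated text, but the idea is the same: push a smaller model toward the big model's output distribution.

    # Minimal logit-level distillation step (sketch, hypothetical names).
    import torch
    import torch.nn.functional as F

    def distill_step(teacher, student, batch, optimizer, T=2.0):
        # Teacher provides the target distribution; no gradients needed.
        with torch.no_grad():
            t_logits = teacher(batch["input_ids"]).logits
        s_logits = student(batch["input_ids"]).logits
        # KL between temperature-softened distributions; the T*T factor
        # rescales gradients back to the usual magnitude (Hinton et al.).
        loss = F.kl_div(
            F.log_softmax(s_logits / T, dim=-1),
            F.softmax(t_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()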
We should be glad that the foundation model companies are stuck running on treadmills. Runaway success would be bad for everyone else in the market.
Let them sweat.
Bullseye.
For coding I don’t use any of the previous gen models anymore.
Ideally I would have both fast and SOTA; if I had to pick one, I’d go with SOTA.
There’s a report by OpenRouter on what folks actually pay for; in the coding domain it’s generally the SOTA models. Folks are still paying a premium for them today.
There is a question of whether there’s a bar where coding models become “good enough”; for myself, I always want smarter / SOTA.