But actual progress seems slower. These models are releasing more often, but each release isn't a big leap.
With scaling up training becoming increasingly difficult, the gains instead appear to be coming from better training techniques, which seems to be working well for everyone.
GPT 5.3 (/Codex) was a huge leap over 5.2 for coding.
We used to get one annual release that was 2x as good; now we get quarterly releases that are each 25% better. Compounded over four quarters, that's 1.25^4 ≈ 2.44, so annually we're now at roughly 2.4x better.
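To make the compounding explicit, here's a quick back-of-the-envelope check in plain Python. It assumes nothing beyond the 25%-per-quarter figure above:

```python
# Compound four quarterly improvements into an annual gain.
quarterly_gain = 1.25              # each quarterly release is 25% better
annual_gain = quarterly_gain ** 4  # four releases per year

print(f"Annual improvement: {annual_gain:.2f}x")  # -> 2.44x
```

In other words, four modest quarterly steps slightly outpace the old single 2x annual jump, even though no individual release feels like a leap.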