A new pretrain would definitely get more than a .1 version bump, and would get a whole lot more hype, I'd think. They're expensive to do!
Not necessarily. GPT-4.5 was a new pretrain on top of a sizeable raw model scale bump, and it only got a 0.5 bump - because the gains from reasoning training in the o-series overshadowed GPT-4.5's natural advantage over GPT-4.
OpenAI might have learned not to overhype. They already shipped GPT-5, which was only an incremental upgrade over o3 and was received poorly, partly for that reason.
Maybe they felt the increase in capability wasn't worth a bigger version bump. Additionally, pre-training isn't as important as it used to be; most of the advances we see now probably come from the RL stage.
Not if they didn't feel it delivered customer value, no? It's about under-promising and over-delivering, in every instance.
It’s possible they’re using some new architecture to get more up-to-date data, but I think that’d be even more of a headline.
My hunch is that this is the same 5.1 post-training on a new pretrained base.
Likely rushed out the door faster than they initially expected/planned.
Yeah because OpenAI has been great at naming their models so far? ;)
Maybe the rumors about failed training runs weren't wrong...
Not if it underwhelms
Releasing anything as "GPT-6" which doesn't provide a generational leap in performance would be a PR nightmare for them, especially after the underwhelming release of GPT-5.
I don't think it really matters what's under the hood. People expect model "versions" to be indexed on performance.