Hacker News

rubicon33 · yesterday at 5:04 PM (3 replies)

But actual progress seems to be slower. These models are releasing more often, but the releases aren't big leaps.


Replies

gallerdude · yesterday at 5:25 PM

We used to get one annual release that was 2x as good; now we get quarterly releases that are each 25% better. Compounded over four quarters, that's 1.25^4 ≈ 2.4x better annually.
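[Editor's note: the compounding arithmetic above can be checked with a quick sketch; the 2x and 25% figures are the commenter's hypothetical numbers, not measured benchmarks.]

```python
# Compare one annual release at 2x vs. four quarterly releases at 1.25x each.
annual_gain = 2.0
quarterly_gain = 1.25 ** 4  # gains compound across the four quarters

print(f"one annual release:  {annual_gain:.2f}x per year")
print(f"quarterly releases:  {quarterly_gain:.2f}x per year")
# 1.25^4 = 2.4414..., so the quarterly cadence comes out slightly ahead.
```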

minimaxir · yesterday at 5:34 PM

Due to the increasing difficulty of scaling up training, the gains appear instead to be coming from better model training, which seems to be working well for everyone.

wahnfrieden · yesterday at 5:19 PM

GPT 5.3 (/Codex) was a huge leap over 5.2 for coding
