Hacker News

Veedrac · today at 2:04 AM · 2 replies

> From what I've seen, models have hit a plateau where code generation is pretty good...

> But it's not improving like it did the past few years.

As opposed to... what? The past few months? Has AI progress so broken our minds as to make us stop believing in the concept of time?


Replies

Aurornis · today at 2:26 AM

I see these claims in a lot of anti-LLM content, but I’m equally puzzled. The pace of progress feels very fast right now.

There is some desire to downplay or dismiss it all, as if the naysayers' "told you so" moment is just around the corner. Yet the goalposts for that moment keep moving with each new release.

It’s sad that this has turned into a culture war where you’re supposed to pick a side and then blind yourself to any evidence that doesn’t support your chosen side. The vibecoding maximalists do the same thing on the other side of this war, but it’s getting old on both sides.

martinald · today at 2:07 AM

Yes, a strange comment. Opus 4.5 is significantly better than what came before, and Opus 4.6 is better still. Same with the 5.2 and 5.3 Codex models.

If anything, the pace has increased.

This may be one of the most important graphs to keep an eye on: https://metr.org/. It also tracks well with my anecdotal experience.

You can see the industry did hit a bit of a wall in 2024, when improvements dropped below the log trend. In 2025, however, the industry is significantly _above_ the trend line.
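
For context on what "the trend line" means here: the METR chart plots how long a task models can complete (at a fixed success rate) against time on a log scale, so "above trend" means beating an exponential extrapolation. A minimal sketch of that comparison, with made-up numbers purely for illustration rather than METR's actual data:

    import numpy as np

    # Hypothetical sample points (not METR's data): year vs. the task-length
    # horizon, in minutes, that models could reliably handle.
    years = np.array([2021, 2022, 2023, 2024], dtype=float)
    horizon_min = np.array([1.0, 4.0, 15.0, 40.0])

    # An exponential trend is a straight line in log space:
    # log(horizon) = slope * t + intercept.
    t = years - years[0]  # shift the origin for a better-conditioned fit
    slope, intercept = np.polyfit(t, np.log(horizon_min), 1)

    def trend(year):
        # Horizon (minutes) the fitted trend line predicts for a given year.
        return np.exp(slope * (year - years[0]) + intercept)

    # A new result is "above trend" if it beats the extrapolated line.
    observed_2025 = 200.0  # hypothetical 2025 measurement
    print(f"2025 trend: {trend(2025.0):.0f} min, observed: {observed_2025:.0f} min")
    print("above trend" if observed_2025 > trend(2025.0) else "below trend")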