Hacker News

Bombthecat 01/15/2026

I doubt that we will hit diminishing returns in AI. We keep finding new ways to make models faster, cheaper, or better, or even have them train themselves...

The flat line prediction is now 2 years old...


Replies

riknos314 01/16/2026

Many things that look exponential at first turn out to actually be sigmoidal.
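The "looks exponential, turns out sigmoidal" point can be sketched numerically: early in a logistic curve, successive growth ratios are nearly constant, which is exactly what an exponential looks like, while near the plateau those ratios collapse toward 1. A toy illustration (curve parameters are arbitrary, not a model of anything real):

```python
import math

def logistic(t, K=1.0, r=1.0, t0=10.0):
    """Logistic (sigmoid) curve with carrying capacity K, rate r, midpoint t0."""
    return K / (1.0 + math.exp(-r * (t - t0)))

# Early phase (t well below the midpoint): ratios of successive values
# are nearly constant, so the data is indistinguishable from exponential.
early = [logistic(t) for t in range(0, 5)]
early_ratios = [b / a for a, b in zip(early, early[1:])]

# Late phase (t well past the midpoint): ratios collapse toward 1.0,
# i.e. the curve flatlines.
late = [logistic(t) for t in range(15, 20)]
late_ratios = [b / a for a, b in zip(late, late[1:])]

print(early_ratios)  # each ratio is close to e ≈ 2.718
print(late_ratios)   # each ratio is close to 1.0
```

The catch, of course, is that from inside the early phase the two curves are statistically indistinguishable; you only learn which one you were on after the inflection.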

I consider the start of this wave of AI to be approximately the 2017 Google transformer paper, and yet transformers didn't really have enough data points to look exponential until GPT-3 in 2020.

The following is purely speculation for fun and sparking light-hearted conversation:

My gut feeling is that this generation of models transitioned out of the part of the sigmoid that looks roughly exponential after the introduction of reasoning models.

My prediction is that transformer-based models will enter the phase that asymptotes to a flat line within 1-2 years.

I leave open the possibility that a different form of model emerges that is on a true exponential, but I don't believe transformers are right now.

aaronblohowiak 01/15/2026

Feels like the top of the S-curve lately.

eikenberry 01/15/2026

I thought the prediction was that scaling LLMs up would stop making them better, not that all advancement would stop? And that has pretty much happened: the advancements over the last year or more have been architectural, not from scaling up.

sfn42 01/15/2026

You say that, but to me they seem roughly the same as they've been for a good while. Wildly impressive technology, very useful, but also clearly and confidently incorrect a lot. Most of the improvement seems to have come from other avenues: search engine integration, image processing (it still blows my mind every time I send a screenshot to an LLM and it gets it), and stuff like that.

Sure, maybe they do better on some benchmarks, but to me the experience of using LLMs is, and has been, limited by their tendency to be confidently incorrect, which undermines both their illusion of intelligence and their usefulness. And I don't really see any clear path past this hurdle; I think this may just be about as good as they're gonna get in that regard. Would be great if they prove me wrong.
