
Bombthecat · yesterday at 4:20 PM

I doubt that we will hit diminishing returns in AI. We keep finding new ways to make models faster, cheaper, or better, or even to have them train themselves...

The flat-line prediction is now two years old...


Replies

eikenberry · yesterday at 6:59 PM

I thought the prediction was that scaling up LLMs would stop making them better, not that all advancement would stop? And that has pretty much happened: the advancements over the last year or more have been architectural, not from scaling up.

aaronblohowiak · yesterday at 4:34 PM

Feels like the top of an S-curve lately.

sfn42 · yesterday at 10:44 PM

You say that, but to me they seem roughly the same as they've been for a good while. Wildly impressive technology, very useful, but also clearly and confidently incorrect a lot of the time. Most of the improvement seems to have come from other avenues: search engine integration, image processing (it still blows my mind every time I send a screenshot to an LLM and it gets it), and things like that.

Sure, maybe they do better on some benchmarks, but to me the experience of using LLMs is, and has been, limited by their tendency to be confidently incorrect, which undermines both the illusion of intelligence and their usefulness. And I don't really see any clear path past this hurdle; I think this may just be about as good as they're going to get in that regard. Would be great if they prove me wrong.