Hacker News

PollardsRho · yesterday at 6:40 PM

> Given consistent trends of exponential performance improvements over many years and across many industries, it would be extremely surprising if these improvements suddenly stopped.

This is the part I find very strange. Let's table the problems with METR [1], just noting that benchmarking AI is extremely hard and METR's methodology is not gospel just because METR's "sole purpose is to study AI capabilities". (That is not a good way to evaluate research!)

Taking whatever idealized metric you want, at some point it has to level off. That's almost trivially true: everyone should agree that unrestricted exponential growth forever is impossible, if only for the eventual heat death of the universe. That makes the question when, and not if. When do external forces dominate whatever positive feedback loops were causing the original growth? In AI, those positive feedback loops include increased funding, increased research attention and human capital, increased focus on AI-friendly hardware, and many others, including perhaps some small element of AI itself assisting the research process that could become more relevant in the future.

These positive feedback loops have happened many times, and they often do experience quite sharp level-offs as some external factor kicks in. Commercial aircraft speeds experienced a very sharp increase until they leveled off. Many companies grow very rapidly at first and then level off. Pandemics grow exponentially at first before revealing their logistic behavior. Scientific progress often follows a similar trajectory: a promising field emerges, significant increased attention brings a bevy of discoveries, and as the low-hanging fruit is picked the cost of additional breakthroughs surges and whatever fundamental limitations the approach has reveal themselves.
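The exponential-then-logistic pattern can be made concrete with a toy sketch. All parameters here (growth rate, carrying capacity, starting value) are arbitrary illustrations, not drawn from any real trend: the point is only that a logistic curve is nearly indistinguishable from an exponential early on, then saturates.

```python
import math

# Arbitrary toy parameters: growth rate, carrying capacity, initial value.
r, K, x0 = 0.5, 1000.0, 1.0

def exponential(t):
    """Unrestricted exponential growth."""
    return x0 * math.exp(r * t)

def logistic(t):
    """Logistic growth with the same rate and initial value, capped at K."""
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

for t in [0, 2, 5, 10, 20, 30]:
    e, l = exponential(t), logistic(t)
    # Early on the two are nearly identical; later the exponential
    # diverges by orders of magnitude while the logistic levels off.
    print(f"t={t:2d}  exp={e:14.1f}  logistic={l:8.1f}")
```

Early samples from the two curves agree to within a fraction of a percent, which is exactly why "the data looks exponential so far" cannot distinguish the two regimes.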

It's not "extremely surprising" that COVID did not infect a trillion people, even though there are some extremely sharp exponentials you can find looking at the first spread in new areas. It isn't extremely surprising that I don't book flights at Mach 3, or that Moore's Law was not an ironclad law of the universe.

Does that mean the entire field will stop making any sort of progress? Of course not. But any analysis that fundamentally boils down to taking a (deeply flawed) graph and drawing a line through it and simplifying the whole field of AI research to "line go up" is not going to give you well-founded predictions for the future.

A much more fruitful line of analysis, in my view, is to focus on the actual conditions: build a reasonable model of AI progress that fits current data while building in estimates of sigmoidal behavior. Does training scaling continue forever? Probably not, given the problems with e.g. GPT-4.5 and the limited supply of quality non-synthetic training data. It's reasonable to expect synthetic training data to work better over time, and to expect the next generation of hardware to enable another couple of orders of magnitude of scaling. Beyond that, especially if the money runs out, scaling seems likely to hit a pretty hard wall barring exceptional progress. Will inference hardware improve enough that the cost of drastically increased token outputs and parallelism won't matter? Probably not, but you can certainly forecast continued hardware improvements to some degree. What might a new architectural paradigm for AI look like, and would it offer significant improvements over current methodology? To what degree is existing AI deployment increasing the amount of useful data for AI training? Which parts of the AI improvement cycle depend on real-world tasks that might fundamentally limit progress?
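The failure mode of "drawing a line through it" can also be sketched directly. In this toy example (logistic parameters chosen arbitrarily, no real data), an exponential fitted to the early portion of a logistic trend matches it closely in-sample, then overshoots by orders of magnitude when extrapolated:

```python
import math

# Hypothetical trend: early samples from a logistic curve.
# Parameters are arbitrary, chosen purely for illustration.
r, K, x0 = 0.5, 1000.0, 1.0

def logistic(t):
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

ts = list(range(0, 8))                 # the "observed" early window
ys = [logistic(t) for t in ts]

# Fit y = a * exp(b*t) by ordinary least squares on log(y).
n = len(ts)
sx = sum(ts)
sy = sum(math.log(y) for y in ys)
sxx = sum(t * t for t in ts)
sxy = sum(t * math.log(y) for t, y in zip(ts, ys))
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = math.exp((sy - b * sx) / n)

# In-sample, the exponential fit tracks the data closely...
print("fit at t=7: ", a * math.exp(b * 7), " actual:", logistic(7))
# ...but extrapolated forward, it overshoots by orders of magnitude
# while the true curve saturates near K.
print("fit at t=30:", a * math.exp(b * 30), " actual:", logistic(30))
```

Both models explain the observed window almost equally well, so the forecast is driven entirely by the modeling assumption, which is the whole argument for modeling the saturation mechanisms explicitly rather than extrapolating a line.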

That's what the discussion should be, not reposting METR for the millionth time and saying "line go up" the way people do about Bitcoin.

[1] https://www.transformernews.ai/p/against-the-metr-graph-codi...


Replies

neom · yesterday at 6:45 PM

"everyone should agree that unrestricted exponential growth forever is impossible, if only for the eventual heat death of the universe." - why is this a good/useful framing?
