First, I don't think we will ever get to AGI. Not because we won't see huge advances still, but because AGI is a moving, ambiguous target that we won't get consensus on.
But why does this paper impact your thinking on it? It is about budget and recognizing that different LLMs have different cost structures. It's not really an attempt to improve LLM performance in absolute terms.
Given OpenAI's definition, I'd expect AGI to be around in a decade or two. I don't expect Skynet, though; maybe the more realistic outcome is just droids mixing with humans.
So you don't expect AGI to ever be possible? Or is your concern mainly the wildly different definitions people use for it, and that we'll keep moving the goalposts rather than agree we got there?
I can totally see "it's not really AGI because it doesn't yet consistently outperform those three top-0.000001% outlier human experts when they work together".
It'll be a while until the ability to move the goalposts of "actual intelligence" is exhausted entirely.