At least for me, the core criticism of AI 2027 was always that it was an extremely simplistic "number go up, therefore AGI" argument, with some nice fiction-y words around it.
The scribble model kind of hints at what a better forecast would've done - you start from the scribbles and ask "what would it take to get that line, and how would we get there?" And I love that the initial set of scribbles will, amongst other things, expose your biases. (Because you draw the set of scribbles that seems plausible to you, a priori.)
The fact that it can both guide you towards exploring alternatives and expose your biases, while being extremely simple - marvellous work.
Definitely going to incorporate this into my reasoning toolkit!
To me, 2027 looks like a case of writing the conclusion first and then trying to explain backwards how it happens.
If everything goes "perfectly", then the logic works (to an extent, though the accelerating rate of returns is a suspicious assumption baked into it).
But everything must go perfectly for that to happen: all the productivity multipliers must be independent, and the USA must decide to take this genuinely seriously (not fake seriously, in the form of politicians saying "we're taking this seriously" while not doing much) and therefore rush toward the target no-expenses-spared, as if it were actually an existential threat. I see no way this would be a baseline scenario.