To be honest, I expected the punchline to be about how randomly drawing lines is the same nonsense as using simplistic mathematical modeling without considering the underlying phenomenon. But the punchline never came.
Predicting AI is more or less impossible because we have no idea about its properties. With other technologies, we can reason about how small a component can get, and this gives us physical limitations that we can observe. With AI, we throw in data and we either are or are not surprised by the behavior the model exhibits. From the few datapoints we have, it seems that more compute and more data usually lead to better performance, but that is more or less everything we can say about it; there is no theory behind it that would guarantee us the gains for the next 10x.
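To make the "no guarantee for the next 10x" point concrete, here is a minimal sketch with made-up numbers: two curves, a pure power law and a power law with an irreducible floor, that fit the same hypothetical observed datapoints almost equally well but diverge badly once you extrapolate two orders of magnitude further. Nothing here is real scaling data; the constants are chosen purely for illustration.

```python
# Hypothetical loss-vs-compute datapoints (arbitrary units); the two
# models below are tuned to agree closely on this observed range.
observed_compute = [1.0, 10.0, 100.0]

def pure_power_law(c):
    # L(c) = 3.0 * c^(-0.3): improvement continues indefinitely
    return 3.0 * c ** -0.3

def power_law_with_floor(c):
    # L(c) = 2.8 * c^(-0.3519) + 0.2: improvement flattens toward a floor
    return 2.8 * c ** -0.3519 + 0.2

# On the observed range the two fits are nearly indistinguishable...
for c in observed_compute:
    print(f"c={c:>7}: {pure_power_law(c):.3f} vs {power_law_with_floor(c):.3f}")

# ...but two orders of magnitude out, they disagree badly.
for c in [1000.0, 10000.0]:
    print(f"c={c:>7}: {pure_power_law(c):.3f} vs {power_law_with_floor(c):.3f}")
```

The data alone cannot tell you which curve you are on; only a theory of the underlying phenomenon could, and for AI we don't have one.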