It's not the comment that's illogical, it's your (mis)interpretation of it. What I (and seemingly others) took it to mean is basically: could an LLM do Einstein's job? Could it weave together all those loose threads into a coherent new way of understanding the physical world? If so, AGI can't be far behind.
Einstein is not AGI, and AGI is not Einstein.
AGI is human-level intelligence, and the minimum bar is Einstein?
This alone still wouldn't be a clear demonstration that AGI is around the corner. It's quite possible an LLM could've done Einstein's job, if Einstein's job was truly just synthesising already available information into a coherent new whole. (I couldn't say; I don't know enough about the physics landscape of the day to claim either way.)
It's still unclear whether this process could simply be continued, seeded only with new physical data, to keep progressing beyond that point, "forever", or at least for as long as we imagine humans would continue making scientific progress.