I think it's a really poor argument that AGI won't happen because models don't understand the physical world. That can be trained the same way everything else is.
I think the biggest issue we currently have is proper memory. But even that is only because it's not feasible to post-train an individual model on its own experiences at scale; it's not a fundamental architectural limitation.
When people move the goalposts for AGI toward physical embodiment, they are usually doing it so they can keep raising funding rounds at higher valuations. I'm not saying the author is doing that.