Given the mechanistic interpretability findings, I'm not sure how anyone can still seriously say things like "no real world model".
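To be concrete about which findings: the board-state probing work (e.g. Othello-GPT, Li et al., "Emergent World Representations") showed that simple probes can decode the game state from a sequence model's internal activations. A minimal sketch of that kind of probe, with the data files as hypothetical placeholders for your own activation-extraction pipeline:

```python
# Minimal linear-probe sketch in the spirit of the Othello-GPT results.
# Assumes you've already dumped per-position activations and ground-truth
# board labels; both .npy filenames here are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

acts = np.load("layer6_activations.npy")    # (n_positions, d_model)
labels = np.load("board_state_labels.npy")  # (n_positions,) e.g. empty/mine/yours

X_tr, X_te, y_tr, y_te = train_test_split(acts, labels,
                                          test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Held-out accuracy well above chance means the board state is decodable
# from the activations -- the kind of evidence people cite as a "world model".
print("held-out probe accuracy:", probe.score(X_te, y_te))
```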
They have a _text_ model. There is some correlation between the text model and the world, but it’s loose and only because there’s a lot of text about the world. And of course robotics researchers are having to build world models, but these are far from general. If they had a real world model, I could tell them I want to play a game of chess and they would be able to remember where the pieces are from move to move.
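That claim is cheap to test with python-chess: replay the move history to the model each turn and check whether its reply is legal on the real board. A rough harness, with `ask_model` as a hypothetical stand-in for whatever LLM API you call:

```python
# Rough harness for the chess claim: feed the model the move history each
# turn and check whether its reply is a legal move on the actual board.
import chess

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug your LLM call in here")  # hypothetical stub

board = chess.Board()
history = []
while not board.is_game_over() and len(history) < 60:
    prompt = ("We are playing chess. Moves so far (SAN): "
              + (" ".join(history) or "none")
              + ". Give only your next move in SAN.")
    reply = ask_model(prompt).strip()
    try:
        board.push_san(reply)  # raises ValueError if the move is illegal here
    except ValueError:
        print(f"lost track of the board after {len(history)} plies: {reply!r}")
        break
    history.append(reply)
```

An illegal move partway through is exactly the failure being described: the full game state is in the prompt, so a model with a real board representation should never propose one.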
People are finding it hard to grasp that emergent properties can appear at very large scales and dimensions.
People just overstate their understanding and knowledge, the usual human stuff. The same user has a comment in this thread that contains:
'If you actually know what models are doing under the hood to produce output that...'
Anyone who tells you they know 'what models are doing under the hood' simply has no idea what they're talking about, and it's amazing how common this is.