As a machine learning researcher, I don't get why these are called world models.
Visually, they are stunning, but they're nowhere near physically accurate. Look at that video with the girl and the lion: the tail teleports between legs and then attaches to the girl instead of the lion.
Just because the visuals are high quality doesn't mean it's a world model or has learned physics. I feel like we're conflating these things. I'd be much happier to call something a world model if its visual quality is dogshit but it is consistent with its world. And I say *its* world because it doesn't need to be consistent with ours.
None of these example videos looks like the kind of “experiments” they talk about simulating with these models.
I was expecting them to test a simple hypothesis and compare the model's results against a real-world test.
For a minute I was like (spoiler alert) “wow, the creepy sci-fi theories from the DEVS TV show are taking place”… then I looked up the video, and it's just video generation at this point.
This looks interesting, but can someone explain how this is different from video generators that feed the previous frames back in as input to generate the next frame?
Is this more than recursive video? If so, how?
This appears to be a simulator that produces only nice things.
I can't wait for companies like this to run out of money
I'm doing a metasim in full 3D with physics. I just see the limitations of the video format too clearly, though it is amazing when done right. The other big concern is licensing of the output.
The reason they are called "world models" is that the internal representation behind what they display is a "world" rather than just a video frame or image. The model needs to "understand" geometry and physics to output a coherent video.
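To make that distinction concrete, here's a toy sketch (the class names and dynamics are entirely made up for illustration, not any real model's API): a frame-autoregressive generator maps pixels to pixels, while a world model keeps an internal state and renders frames *from* that state, so consistency is structural rather than learned per pixel.

```python
# Hypothetical toy classes illustrating "recursive video" vs. "world model".

class FrameAutoregressor:
    """Next frame is a function of previous frames only (pixel space)."""
    def step(self, prev_frames):
        # e.g. blend the last two frames -- no notion of objects or physics
        return [(a + b) / 2 for a, b in zip(prev_frames[-1], prev_frames[-2])]

class WorldModel:
    """Internal state (position, velocity) evolves by explicit dynamics;
    each frame is a rendering of that state."""
    def __init__(self, pos=0.0, vel=1.0):
        self.pos, self.vel = pos, vel

    def step(self, dt=1.0):
        self.pos += self.vel * dt  # the "physics" lives here, not in pixels
        return self.render()

    def render(self):
        return [self.pos]  # a 1-pixel "frame", for illustration only

wm = WorldModel()
frames = [wm.step() for _ in range(3)]  # object moves consistently: 1.0, 2.0, 3.0
```

In the toy world model, an object can never "teleport" between frames, because every frame is derived from the same evolving state; the autoregressive version has no such constraint.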
Just because there are errors in this doesn't mean it isn't significant. If a machine learning model understands how physical objects interact with each other, that is very useful.