I still don't get what LGM is. From what I understood, it isn't actually about any "geospatial" data at all, is it? It is rather about improving some vision models to predict how the backside of a building looks, right? And the training data isn't from people walking around, but from images they've produced while catching Pokémon or something?
P.S.: Also, if that's indeed what they mean, I wonder why having Google Street View data isn't enough for that.
> It is rather about improving some vision models to predict how the backside of a building looks, right?
This, yes, based on how the backsides of similar buildings have looked in other learned areas.
But the other missing piece of what it is seems to be relativity and scale. I do 3D model generation at our game studio right now, and the biggest capability current models lack is scale (and, specifically, relative scale) -- we can generate 3D models for entities in our game, but we still need a person in the loop to scale them to the correct size relative to other models: trees are bigger than humans, and buildings are bigger still. Current generative 3D models just output a scale-less model; it looks like a "geospatial" model incorporates some form of relative scale, and would (could?) bake that into generated models (or, more likely, maps of models rather than individual models themselves).
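To make that concrete, here's a minimal sketch of the manual step the person in the loop does today: pick a real-world height prior for the model's category and uniformly rescale the scale-less output to match. The category names, heights, and numpy representation are mine for illustration, not anything a current generator actually emits.

```python
import numpy as np

# Rough real-world height priors in metres -- illustrative numbers only.
HEIGHT_PRIORS_M = {
    "human": 1.75,
    "tree": 8.0,
    "building": 15.0,
}

def rescale_to_category(vertices: np.ndarray, category: str) -> np.ndarray:
    """Scale a unitless generated mesh so its bounding-box height matches
    the prior for its category. `vertices` is an (N, 3) array with Y up."""
    height = vertices[:, 1].max() - vertices[:, 1].min()
    if height == 0:
        raise ValueError("degenerate mesh with zero height")
    factor = HEIGHT_PRIORS_M[category] / height
    return vertices * factor  # uniform scale keeps proportions intact

# Example: a unit cube tagged as a "tree" becomes 8 m tall.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
tree = rescale_to_category(cube, "tree")
print(tree[:, 1].max() - tree[:, 1].min())  # -> 8.0
```

The point is just that the scale currently has to come from somewhere outside the generated mesh; a model trained on real scenes where trees stand next to buildings could in principle supply that prior itself.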
The ultimate goal is to use the phone camera to get very accurate mapping and positioning. They're able to merge images from multiple sources, which means they can localize an image against their database, at least relatively.
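I don't know what their actual pipeline looks like, but the textbook version of "localize an image against a database" is to match 2D features in the photo to known 3D map points and solve a perspective-n-point problem for the camera pose. A toy sketch with synthetic correspondences (the map points, intrinsics, and pose below are invented for the demo; OpenCV's solvePnP does the solve):

```python
import numpy as np
import cv2

# A handful of known 3D map points (metres, arbitrary frame), standing in
# for the reconstructed scene a localization service would hold server-side.
map_points = np.array([
    [0.0, 0.0, 5.0], [1.0, 0.0, 6.0], [0.0, 1.0, 7.0],
    [1.0, 1.0, 5.5], [-1.0, 0.5, 6.5], [0.5, -1.0, 7.5],
], dtype=np.float64)

# Pinhole intrinsics for a hypothetical phone camera.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Ground-truth pose, used only to synthesise the 2D observations for this demo.
rvec_true = np.array([[0.1], [-0.2], [0.05]])
tvec_true = np.array([[0.3], [-0.1], [0.5]])
pixels, _ = cv2.projectPoints(map_points, rvec_true, tvec_true, K, None)

# The localization step: given 2D-3D matches, recover the camera pose.
ok, rvec_est, tvec_est = cv2.solvePnP(map_points, pixels, K, None)
print(ok, rvec_est.ravel(), tvec_est.ravel())  # roughly matches the true pose
```

The hard part in practice is getting reliable 2D-3D matches at scale, which is presumably where the learned model comes in.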
> And the training data isn't from people walking around, but from images they've produced while catching Pokémon or something?
The training data is people taking dedicated video of locations. Also, only ARCore-supported devices can submit data, so I assume that along with the video they're collecting a good chunk of other data such as depth maps, accelerometer, gyroscope, and magnetometer readings, GPS, and more.
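Purely as a guess at what one of those uploads might bundle -- this is not Niantic's actual schema, just the kind of per-frame record ARCore makes it possible to collect:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple
import numpy as np

@dataclass
class CaptureFrame:
    """One frame of a hypothetical scan upload (fields guessed, not official)."""
    timestamp_ns: int
    image_jpeg: bytes
    intrinsics: np.ndarray               # 3x3 camera matrix
    pose_world_from_camera: np.ndarray   # 4x4 on-device tracking pose
    depth_mm: Optional[np.ndarray] = None    # per-pixel depth, if supported
    accel: Optional[np.ndarray] = None       # m/s^2, 3-vector
    gyro: Optional[np.ndarray] = None        # rad/s, 3-vector
    magnetometer: Optional[np.ndarray] = None
    gps: Optional[Tuple[float, float, float]] = None  # (lat, lon, accuracy_m)

@dataclass
class ScanUpload:
    device_model: str
    frames: List[CaptureFrame] = field(default_factory=list)
```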