Tried a few random images and scenes; overall it wasn't that impressive. Maybe I'm using the wrong kinds of input images or something, but for the most part, once I moved more than a small amount, the rendering was mostly noise. To be fair, I didn't really expect much more.
Neat demo, but feels like things need to come quite a ways to make this interesting.
Cool, is there a way to upload several photos of a room from different angles and fuse it all together? Is there an API?
Would be useful to have the website say something, _anything_ about what this is doing besides asking you to upload an image.
This is just Apple's tool plus a splat-viewing library? Perhaps it's disingenuous to call it "our web app".
This is the heavy lifting: https://github.com/apple/ml-sharp
Previous discussion: https://news.ycombinator.com/item?id=46284658
Gets stuck at 84% each time - seems wasteful to let it get that far!
Threw 2 images at it; it didn't do anything, just gave an error.
Conveniently fails to start processing
It's funny, it always gets stuck at 90% until it fails with an error saying another big image may be keeping the server busy.
I mean, OK, it's a "demo", but the funny thing is that if you actually check the CLI and the requests, you can clearly see that the three stages the image walks through during "processing" are fake. The backend just runs a single POST request while the UI traverses the stages, and at 90% it stops until (in theory) the request ends.
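For anyone curious, the pattern would look roughly like this. This is just a sketch of what's described above; the endpoint name, stage labels, and timings are my guesses, not the demo's actual code:

```typescript
// Hypothetical reconstruction of a fake staged progress bar over one real
// request. "/api/process", the stage names, and the timer interval are all
// assumptions, not taken from the actual demo.
const STAGES = ["Uploading", "Estimating depth", "Building splats"];

async function processImage(file: File): Promise<Blob> {
  // One real request does all the work in the background.
  const request = fetch("/api/process", { method: "POST", body: file }).then(
    (res) => {
      if (!res.ok) throw new Error(`server returned ${res.status}`);
      return res.blob();
    }
  );

  // Meanwhile the UI walks through cosmetic stages on a timer,
  // capping at 90% until the request actually settles.
  let percent = 0;
  const timer = setInterval(() => {
    percent = Math.min(percent + 5, 90);
    const stage = STAGES[Math.min(Math.floor(percent / 30), STAGES.length - 1)];
    console.log(`${stage}... ${percent}%`);
  }, 500);

  try {
    return await request; // stalls at 90% while the server is busy
  } finally {
    clearInterval(timer);
  }
}
```

That would explain both the stall at a fixed percentage and the "server busy" error: the bar is decorative and only the single request matters.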
Or one-click install on your own device: https://pinokio.co/item.html?uri=https%3A%2F%2Fgithub.com%2F...
If this model is so good at estimating depth from a single image, shouldn't it also be able to take multiple images as input and produce an even better estimate? But from a bit of searching, it looks like this is single-image-to-3D only. I don't understand why it doesn't (can't?) work with multiple images.
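In principle nothing stops you from fusing several single-image predictions yourself, if you know the relative camera poses. A minimal sketch of that naive fusion, assuming you already have per-image point clouds and known 4x4 camera-to-world poses (both assumptions on my part; the model gives you neither):

```typescript
// Naive multi-view fusion: not part of the model, just the obvious baseline.
// Assumes each image has already been lifted to points in its own camera
// frame, and that each camera-to-world pose is known.
type Vec3 = [number, number, number];
type Mat4 = number[]; // 16 entries, row-major 4x4 camera-to-world transform

function transformPoint(m: Mat4, [x, y, z]: Vec3): Vec3 {
  // Apply the affine part of the 4x4 transform (rotation + translation).
  return [
    m[0] * x + m[1] * y + m[2] * z + m[3],
    m[4] * x + m[5] * y + m[6] * z + m[7],
    m[8] * x + m[9] * y + m[10] * z + m[11],
  ];
}

// Concatenate every view's points in one world frame. Real multi-view
// systems also deduplicate overlapping geometry, weight by confidence, and
// jointly optimize poses, which is presumably why it isn't a trivial
// extension of a single-image model.
function fuseViews(views: { pose: Mat4; points: Vec3[] }[]): Vec3[] {
  return views.flatMap(({ pose, points }) =>
    points.map((p) => transformPoint(pose, p))
  );
}
```

The hard parts are everything this sketch skips: getting the poses, reconciling inconsistent per-image scale, and merging overlapping regions cleanly.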