Anyone tried using lidar and just cut/measure distance to the object?
That would require calibration with the camera, and even then the camera and lidar sensor can’t be in exactly the same place. I doubt results would be better.
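To make the parallax point concrete, here's a minimal sketch (all numbers are assumptions, not from any real rig) of projecting a lidar point into a camera image: because the two sensors sit a few centimeters apart, the same 3D point lands on different pixels depending on its distance, which is exactly why the depth map and the image never line up perfectly without calibration.

```python
import numpy as np

# Hypothetical calibration: rotation/translation from lidar frame to camera
# frame, plus pinhole intrinsics. All values are made up for illustration.
R = np.eye(3)                     # assume sensors are rotationally aligned
t = np.array([0.05, 0.0, 0.0])    # assumed 5 cm baseline between sensors
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])  # assumed focal length / principal point

def project_lidar_point(p_lidar):
    """Transform a lidar point into the camera frame, project to pixel coords."""
    p_cam = R @ p_lidar + t       # rigid transform lidar -> camera
    uvw = K @ p_cam               # pinhole projection
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# A point straight ahead of the lidar at 1 m vs 10 m: the 5 cm sensor
# offset shifts its projected pixel much more at close range (parallax).
u1, _ = project_lidar_point(np.array([0.0, 0.0, 1.0]))   # 40 px off center
u2, _ = project_lidar_point(np.array([0.0, 0.0, 10.0]))  # 4 px off center
print(u1, u2)
```

The depth-dependent pixel shift is why a single static warp can't align the two sensors; you need the full calibration and per-point reprojection.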
Well, sort of. The industry tried to go way beyond that by capturing the entire light field: https://techcrunch.com/2016/04/11/lytro-cinema-is-giving-fil...
Per-pixel depth doesn't solve for semi-transparency.
Apparently they used something similar in production on Avatar: stereo cameras for depth estimation, which allowed real-time depth compositing of CG characters into the shots as they were being taken. That made it much easier to get everyone on the same page about the scene, especially with characters outside normal human proportions. It wasn't good enough for the final shots, though.