Hacker News

numpad0 · today at 2:12 AM

A LIDAR point cloud from a single point of view is a monocular depth map. Unless the LIDAR in question is, like, using supernova-level gamma rays or neutrino generators for the laser part to get density and albedo volumetric data over its whole distance range.

You just can't see the back of a thing by knowing the shape of the front side with current technologies.


Replies

itishappy · today at 3:26 PM

Right! My terminology may be imprecise here, but I believe there is still an important distinction:

The depth map stored for image processing is image metadata: it records one depth value per pixel, all from a single position in space. The device can't actually measure that many depth values, so it measures what it can using LIDAR and focus information and estimates the rest.

On the other hand, a point cloud is not image data. It isn't necessarily taken from a single position; in theory the device could be moved around to capture additional angles, and the result is a sparse set of depth measurements. Also, raw point cloud data doesn't necessarily come tagged with per-point metadata such as color.
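To make the distinction concrete, here's a minimal sketch (assuming a simple pinhole-camera model with made-up intrinsics, not any particular device's API): a depth map is a per-pixel grid of distances tied to one viewpoint, and unprojecting it through the camera intrinsics turns it into a point cloud, which is just an unordered set of 3D positions with no pixel grid implied.

```python
import numpy as np

# Hypothetical tiny camera: 4x6 image, assumed focal length and
# principal point. A depth map has exactly one depth per pixel.
H, W = 4, 6
fx = fy = 50.0
cx, cy = W / 2, H / 2

depth = np.full((H, W), 2.0)  # every pixel 2.0 units away, for illustration

# Back-project each pixel ray through the pinhole model:
# x = (u - cx) * z / fx, y = (v - cy) * z / fy
u, v = np.meshgrid(np.arange(W), np.arange(H))
x = (u - cx) * depth / fx
y = (v - cy) * depth / fy

# The point cloud is just an (N, 3) array of positions; the pixel
# grid structure is gone, and nothing stops you from concatenating
# clouds unprojected from other viewpoints.
cloud = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
print(cloud.shape)  # (24, 3)
```

Going the other way (point cloud back to depth map) only works if you pick a single viewpoint to project onto, which is exactly the information a raw point cloud doesn't carry.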

I'll also note that these distinctions start to vanish when dealing with video or when using more than one capture device.