Paper has some more useful examples:
https://imaging.cs.cmu.edu/svaf/static/pdfs/Spatially_Varyin...
It's a neat new idea: selectively adjusting the focus distance for different regions of the scene!
- processing: while there is no post-processing, it needs scene depth information, which requires pre-computation, segmentation, and depth estimation. It's not a one-shot technique, and the quality depends on the computational depth estimates being good.
- no free lunch: the optical setup has to trade away some light for this cool effect to work. Apart from the limitations of the prototype, how much loss is expected in theory? How does this compare to a regular camera stopped down to a smaller aperture? f/36 seems an excessive baseline for comparison (rough back-of-envelope numbers at the end of this comment).
- resolution: what resolution has actually been achieved? (Maybe not the full 12 megapixels of the sensor? For practical or theoretical reasons?) What depth range can the prototype capture? "Photo of the Paris Arc de Triomphe displayed on a screen" - this is suspiciously omitted.
- how does the bokeh look when out of focus? At the edge of an object? Weird or unnatural artifacts would seriously limit acceptance.
Don't get me wrong - nice technique! But for my liking the paper omits some fundamental properties.
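On the light question, a rough back-of-envelope sketch (assuming light gathered simply scales with aperture area, i.e. ~1/N^2; the specific f-numbers below are my own picks, not from the paper):

    # Illustrative only: light collected scales roughly with aperture area, ~1/N^2.
    def relative_light(n_wide, n_narrow):
        """How many times more light the wider aperture (smaller f-number) collects."""
        return (n_narrow / n_wide) ** 2

    print(relative_light(2.8, 36))  # ~165x more light at f/2.8 than at f/36
    print(relative_light(2.8, 16))  # ~33x more light at f/2.8 than at f/16

So if the split-aperture optics only cost a stop or two (2-4x), that would still be far less than the ~165x penalty of stopping a conventional lens down to f/36 - which is why f/36 feels like a generous baseline to compare against.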
As soon as I saw the headline, I began thinking about microphotography - no more blurry microbes! I could get excited about something like this.
I wonder if this camera might somehow record depth information, or be modified to do such a thing.
That would make it really useful, maybe replacing camera+lidar.
I also like my 3d games without DOF.
How is this different from using a small aperture size?
When you reduce the aperture size, the depth of field increases. So, for example, at f/16 pretty much everything from a few feet to infinity is in focus.
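For a concrete sense of the numbers, here's the standard hyperfocal-distance calculation (the 28 mm focal length and 0.03 mm circle of confusion are illustrative full-frame assumptions, not something from the thread):

    # Hyperfocal distance H = f^2 / (N * c) + f
    # Focusing at H makes everything from ~H/2 to infinity acceptably sharp.
    def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
        return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

    h = hyperfocal_mm(28, 16)
    print(f"H = {h / 1000:.2f} m, sharp from ~{h / 2000:.2f} m to infinity")  # ~1.66 m / ~0.83 m

Which matches the "few feet to infinity" intuition - the trade-off being that f/16 collects far less light than a wide aperture.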