Depth of Field Aware Differentiable Rendering
Description
Cameras with a finite aperture diameter exhibit defocus for scene elements that are not at the focus distance and have only a limited depth of field within which objects appear acceptably sharp. In this work, we address the problem of applying inverse rendering techniques to input data that exhibits such defocus blurring. We present differentiable depth-of-field rendering techniques that are applicable both to rasterization-based methods using mesh representations and to ray-marching-based methods using volumetric radiance fields. Our approach learns significantly sharper scene reconstructions from data containing depth-of-field blur and recovers aperture and focus distance parameters that produce plausible forward-rendered images. We show applications to macro photography, where typical lens configurations result in a very narrow depth of field, and to multi-camera video capture, where maintaining sharp focus across a large capture volume for a moving subject is difficult.
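The defocus model underlying such techniques is typically the thin-lens camera: a scene point away from the focus distance images to a blur disk (circle of confusion) whose diameter grows with aperture size and distance from the focal plane, and which is a smooth, differentiable function of the aperture and focus distance parameters the abstract says are recovered. The paper's exact formulation is not given here; as a minimal sketch, the standard thin-lens circle-of-confusion formula (all names hypothetical, units consistent, e.g. meters) looks like:

```python
def coc_diameter(depth, focus_dist, aperture, focal_len):
    """Thin-lens circle-of-confusion diameter for a point at `depth`.

    Standard geometric-optics formula (not the paper's specific method):
    a point at the focus distance maps to a zero-diameter blur disk, and
    the blur grows with aperture diameter and distance from the focal
    plane. All arguments share one length unit; the result is the blur
    diameter projected onto the sensor in that same unit.
    """
    return (aperture * focal_len * abs(depth - focus_dist)
            / (depth * (focus_dist - focal_len)))


# Example: 50 mm lens at f/2 (25 mm aperture), focused at 1 m.
in_focus = coc_diameter(1.0, 1.0, 0.025, 0.05)   # zero blur at the focal plane
behind = coc_diameter(2.0, 1.0, 0.025, 0.05)     # blur grows away from it
```

Because this expression is smooth in `focus_dist` and `aperture` (away from the focal plane itself, where the absolute value has a kink), gradients of a rendering loss can flow back into those lens parameters, which is what makes jointly optimizing them alongside the scene representation feasible.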
Event Type
Technical Papers
Time
Tuesday, 6 December 2022, 10:00am - 12:00pm KST