Efficient Depth- and Spatially-Varying Image Simulation for Defocus Deblur
By: Xinge Yang, Chuong Nguyen, Wenbin Wang, and more
Potential Business Impact:
Makes cameras focus on anything, even in smart glasses.
Modern cameras with large apertures often suffer from a shallow depth of field, resulting in blurry images of objects outside the focal plane. This limitation is particularly problematic for fixed-focus cameras, such as those used in smart glasses, where adding autofocus mechanisms is challenging due to form-factor and power constraints. Because each camera system has its own optical aberrations and defocus properties, deep learning models trained on existing open-source datasets often face domain gaps and do not perform well in real-world settings. In this paper, we propose an efficient and scalable dataset synthesis approach that does not rely on fine-tuning with real-world data. Our method simultaneously models depth-dependent defocus and spatially varying optical aberrations, addressing both computational complexity and the scarcity of high-quality RGB-D datasets. Experimental results demonstrate that a network trained on our low-resolution synthetic images generalizes effectively to high-resolution (12 MP) real-world images across diverse scenes.
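The paper's exact simulation pipeline is not reproduced in this summary, but the core idea of depth-dependent defocus can be sketched with a standard layered approximation: quantize the depth map into layers, blur each layer with a disk PSF whose radius follows the thin-lens circle-of-confusion formula, and composite back to front. The function names, default lens parameters, and layer count below are illustrative assumptions, not the authors' implementation (which additionally models spatially varying aberrations).

```python
import numpy as np
from scipy.signal import fftconvolve

def coc_radius_px(depth, focus_dist, focal_len, f_number, pixel_pitch):
    """Thin-lens circle-of-confusion radius in pixels for a depth in meters."""
    aperture = focal_len / f_number
    coc_m = aperture * focal_len * np.abs(depth - focus_dist) / (
        depth * (focus_dist - focal_len))
    return coc_m / pixel_pitch

def disk_psf(radius_px):
    """Uniform disk PSF; radii below ~0.5 px collapse to an identity kernel."""
    r = max(float(radius_px), 0.0)
    size = 2 * int(np.ceil(r)) + 1
    yy, xx = np.mgrid[:size, :size] - size // 2
    k = (xx**2 + yy**2 <= max(r, 0.5)**2).astype(float)
    return k / k.sum()

def layered_defocus(img, depth, focus_dist, focal_len=4e-3,
                    f_number=2.0, pixel_pitch=1.5e-6, n_layers=8):
    """Layered defocus approximation (hypothetical defaults, not the paper's):
    quantize depth into layers, blur each with its own disk PSF, and
    composite back to front so nearer layers occlude farther ones."""
    edges = np.linspace(depth.min(), depth.max() + 1e-9, n_layers + 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(n_layers - 1, -1, -1):  # farthest layer first
        mask = ((depth >= edges[i]) & (depth < edges[i + 1])).astype(float)
        if mask.sum() == 0:
            continue
        d_mid = 0.5 * (edges[i] + edges[i + 1])
        psf = disk_psf(coc_radius_px(d_mid, focus_dist, focal_len,
                                     f_number, pixel_pitch))
        blurred = fftconvolve(img * mask[..., None], psf[..., None], mode='same')
        weight = fftconvolve(mask, psf, mode='same')[..., None]
        # Blend: where this layer's blurred mask covers a pixel, it wins.
        out = out * (1 - np.clip(weight, 0, 1)) + blurred
    return np.clip(out, 0, 1)
```

A spatially varying extension would additionally make the PSF a function of image position (e.g., field-dependent aberrated kernels), which is the second axis the paper's simulator covers.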
Similar Papers
Fine-grained Defocus Blur Control for Generative Image Models
CV and Pattern Recognition
Makes pictures blurry like real cameras.
Depth-Aware Super-Resolution via Distance-Adaptive Variational Formulation
CV and Pattern Recognition
Makes blurry pictures clearer, even far away.
Examining the Impact of Optical Aberrations to Image Classification and Object Detection Models
CV and Pattern Recognition
Makes computer vision better at seeing blurry pictures.