BokehDiff: Neural Lens Blur with One-Step Diffusion
By: Chengxuan Zhu, Qingnan Fan, Qi Zhang and more
Potential Business Impact:
Adds realistic camera-style blur to photos.
We introduce BokehDiff, a novel lens blur rendering method that achieves physically accurate and visually appealing results with the help of a generative diffusion prior. Previous methods are bounded by the accuracy of depth estimation, producing artifacts at depth discontinuities. Our method employs a physics-inspired self-attention module that aligns with the image formation process, incorporating a depth-dependent circle-of-confusion constraint and self-occlusion effects. We adapt the diffusion model to a one-step inference scheme without introducing additional noise, achieving results of high quality and fidelity. To address the lack of scalable paired data, we propose synthesizing photorealistic foregrounds with transparency using diffusion models, balancing authenticity and scene diversity.
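The depth-dependent circle of confusion mentioned above comes from the thin-lens model: a point off the focus plane projects to a disc whose diameter grows with its distance from that plane. A minimal sketch of the standard thin-lens formula (this is the textbook relation, not necessarily the paper's exact formulation; the function name and parameters are illustrative):

```python
def coc_diameter(subject_dist, focus_dist, focal_length, f_number):
    """Thin-lens circle-of-confusion diameter, in the same units as the inputs.

    subject_dist: distance from lens to the point being imaged
    focus_dist:   distance from lens to the in-focus plane
    focal_length: lens focal length
    f_number:     aperture f-number (N), so aperture diameter = focal_length / N
    """
    aperture = focal_length / f_number  # entrance pupil diameter
    return (aperture * focal_length * abs(subject_dist - focus_dist)) / (
        subject_dist * (focus_dist - focal_length)
    )

# Example: 50 mm lens at f/1.8, focused at 2 m.
# A point on the focus plane has zero blur; blur grows as the point
# moves away from the plane, which is the depth dependence the
# self-attention constraint encodes.
print(coc_diameter(2.0, 2.0, 0.05, 1.8))  # 0.0 (in focus)
print(coc_diameter(4.0, 2.0, 0.05, 1.8))  # larger disc for a farther point
```

Points at the same depth share a blur radius, so a renderer that respects this constraint avoids the halo artifacts at depth discontinuities that the abstract attributes to inaccurate depth maps.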
Similar Papers
Bokeh Diffusion: Defocus Blur Control in Text-to-Image Diffusion Models
Graphics
Makes AI pictures look like real photos.
BokehDepth: Enhancing Monocular Depth Estimation through Bokeh Generation
CV and Pattern Recognition
Makes blurry photos show depth better.
BokehFlow: Depth-Free Controllable Bokeh Rendering via Flow Matching
CV and Pattern Recognition
Makes photos blurry where you want them.