BokehDepth: Enhancing Monocular Depth Estimation through Bokeh Generation
By: Hangwei Zhang, Armando Teles Fortes, Tianyi Wei, and more
Potential Business Impact:
Uses lens blur to help computers judge depth in a single photo.
Bokeh and monocular depth estimation are tightly coupled through the same lens imaging geometry, yet current methods exploit this connection in incomplete ways. High-quality bokeh rendering pipelines typically depend on noisy depth maps, which amplify estimation errors into visible artifacts, while modern monocular metric depth models still struggle on weakly textured, distant, and geometrically ambiguous regions where defocus cues are most informative. We introduce BokehDepth, a two-stage framework that decouples bokeh synthesis from depth prediction and treats defocus as an auxiliary supervision-free geometric cue. In Stage-1, a physically guided controllable bokeh generator, built on a powerful pretrained image editing backbone, produces depth-free bokeh stacks with calibrated bokeh strength from a single sharp input. In Stage-2, a lightweight defocus-aware aggregation module plugs into existing monocular depth encoders, fuses features along the defocus dimension, and exposes stable depth-sensitive variations while leaving the downstream decoder unchanged. Across challenging benchmarks, BokehDepth improves visual fidelity over depth-map-based bokeh baselines and consistently boosts the metric accuracy and robustness of strong monocular depth foundation models.
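To make the Stage-2 idea concrete, here is a minimal PyTorch sketch of what a lightweight defocus-aware aggregation module could look like: it scores each level of the bokeh stack per pixel and fuses encoder features with a softmax over the defocus dimension, so the fused output keeps the same shape and an existing depth decoder needs no changes. The class name, the scoring head, and all tensor shapes are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of a "defocus-aware aggregation" module (Stage-2 idea).
# All names and design choices below are assumptions for illustration only.
import torch
import torch.nn as nn


class DefocusAwareAggregation(nn.Module):
    """Fuse encoder features along the defocus dimension.

    Input:  features of shape (B, K, C, H, W), where K = 1 sharp image
            plus K-1 bokeh renderings of increasing blur strength.
    Output: fused features of shape (B, C, H, W), so a downstream
            monocular depth decoder can consume them unmodified.
    """

    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        # Score each defocus level at every spatial location; a softmax
        # over K turns the scores into per-pixel fusion weights.
        self.score = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.GELU(),
            nn.Conv2d(hidden, 1, kernel_size=1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        b, k, c, h, w = feats.shape
        flat = feats.reshape(b * k, c, h, w)
        scores = self.score(flat).reshape(b, k, 1, h, w)  # (B, K, 1, H, W)
        weights = torch.softmax(scores, dim=1)            # weights over defocus levels
        return (weights * feats).sum(dim=1)               # (B, C, H, W)


if __name__ == "__main__":
    # Toy usage: a stack of 1 sharp + 3 bokeh feature maps from a frozen encoder.
    stack = torch.randn(2, 4, 256, 24, 32)
    fused = DefocusAwareAggregation(channels=256)(stack)
    print(fused.shape)  # torch.Size([2, 256, 24, 32])
```

A per-pixel weighting like this is just one plausible way to expose depth-sensitive variation across blur levels; attention or learned pooling along the stack would serve the same plug-in role.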
Similar Papers
BokehFlow: Depth-Free Controllable Bokeh Rendering via Flow Matching
CV and Pattern Recognition
Makes photos blurry where you want them.
BokehDiff: Neural Lens Blur with One-Step Diffusion
CV and Pattern Recognition
Adds realistic lens blur to photos in one step.
BoRe-Depth: Self-supervised Monocular Depth Estimation with Boundary Refinement for Embedded Systems
CV and Pattern Recognition
Helps robots see in 3D with clear edges.