Trajectory Densification and Depth from Perspective-based Blur
By: Tianchen Qiu, Qirun Zhang, Jiajian He, and more
Potential Business Impact:
Lets cameras see depth without special parts.
In the absence of a mechanical stabilizer, a camera undergoes inevitable rotational motion during capture, which induces perspective-based blur, especially in long-exposure scenarios. From an optical standpoint, perspective-based blur is depth-position-dependent: objects at distinct spatial locations incur different blur levels even under identical imaging settings. Inspired by this, we propose a novel method that estimates metric depth by examining the blur pattern of a video stream and recovers a dense trajectory via a joint optical design algorithm. Specifically, we employ an off-the-shelf vision encoder and point tracker to extract video information. We then estimate the depth map via windowed embedding and multi-window aggregation, and densify the sparse trajectory produced by the optical algorithm using a vision-language model. Evaluations on multiple depth datasets demonstrate that our method attains strong performance over a large depth range while maintaining favorable generalization. Relative to the real trajectory in handheld shooting settings, our optical algorithm achieves superior precision, and the dense reconstruction maintains strong accuracy.
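The depth dependence of blur described above can be illustrated with a toy projection model. The sketch below is not the authors' algorithm; the pinhole focal length, the handheld motion model (a small rotation about a pivot offset from the optical center, which also induces a translation), and all numeric values are illustrative assumptions. It projects a static scene point before and after the exposure-time camera motion and reports the pixel displacement, a proxy for blur streak length, which differs for near and far points along the same viewing ray.

```python
# Minimal sketch (illustrative assumptions, not the paper's method): pixel
# displacement of a scene point under a small handheld camera motion, modeled
# as a rotation about a pivot offset from the optical center. The induced
# translation makes the displacement (blur length) depend on the point's depth.

import numpy as np


def rotation_matrix(axis: np.ndarray, angle_rad: float) -> np.ndarray:
    """Rodrigues' formula for a rotation about a unit axis."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle_rad) * K + (1.0 - np.cos(angle_rad)) * (K @ K)


def project(point_cam: np.ndarray, f_px: float) -> np.ndarray:
    """Pinhole projection of a point in camera coordinates (Z > 0)."""
    return f_px * point_cam[:2] / point_cam[2]


def blur_length_px(point_cam: np.ndarray,
                   axis: np.ndarray,
                   angle_rad: float,
                   pivot: np.ndarray,
                   f_px: float = 1000.0) -> float:
    """Displacement (in pixels) of a scene point when the relative camera-scene
    motion over the exposure is a rotation by `angle_rad` about an axis passing
    through `pivot` (metres from the optical center, e.g. a handheld wrist pivot).
    Rotating about an off-center pivot is a rotation plus an induced translation:
        X' = R @ (X - pivot) + pivot = R @ X + (pivot - R @ pivot)."""
    R = rotation_matrix(axis, angle_rad)
    t = pivot - R @ pivot
    p_before = project(point_cam, f_px)
    p_after = project(R @ point_cam + t, f_px)
    return float(np.linalg.norm(p_after - p_before))


if __name__ == "__main__":
    axis = np.array([0.0, 1.0, 0.0])       # yaw jitter
    angle = np.deg2rad(0.2)                # small rotation over the exposure
    pivot = np.array([0.0, 0.0, -0.1])     # pivot ~10 cm behind the lens
    ray = np.array([0.15, 0.05, 1.0])      # same viewing direction for all depths
    for depth in (0.5, 2.0, 10.0):         # metres
        point = depth * ray                # same initial pixel, different depth
        print(f"depth {depth:5.1f} m -> blur {blur_length_px(point, axis, angle, pivot):.2f} px")
```

Under these assumed values, points at 0.5 m, 2 m, and 10 m along the same viewing ray produce noticeably different displacements, which is the kind of depth-dependent blur cue the method exploits; under a pure rotation about the optical center (pivot at the origin) the displacement would be depth-independent.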
Similar Papers
Seurat: From Moving Points to Depth
CV and Pattern Recognition
Lets computers guess how far away things are.
Parameter-Free Neural Lens Blur Rendering for High-Fidelity Composites
CV and Pattern Recognition
Adds realistic blur to virtual objects in photos.
Depth-Consistent 3D Gaussian Splatting via Physical Defocus Modeling and Multi-View Geometric Supervision
CV and Pattern Recognition
Makes 3D pictures more real, near and far.