Seurat: From Moving Points to Depth
By: Seokju Cho, Jiahui Huang, Seungryong Kim, and more
Potential Business Impact:
Lets computers estimate how far away things are from ordinary video.
Accurate depth estimation from monocular videos remains challenging due to ambiguities inherent in single-view geometry, as crucial depth cues like stereopsis are absent. However, humans often perceive relative depth intuitively by observing variations in the size and spacing of objects as they move. Inspired by this, we propose a novel method that infers relative depth by examining the spatial relationships and temporal evolution of a set of tracked 2D trajectories. Specifically, we use off-the-shelf point tracking models to capture 2D trajectories. Then, our approach employs spatial and temporal transformers to process these trajectories and directly infer depth changes over time. Evaluated on the TAPVid-3D benchmark, our method demonstrates robust zero-shot performance, generalizing effectively from synthetic to real-world datasets. Results indicate that our approach achieves temporally smooth, high-accuracy depth predictions across diverse domains.
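The abstract describes a two-stage pipeline: an off-the-shelf point tracker produces 2D trajectories, then spatial and temporal transformers turn those trajectories into per-point depth over time. The sketch below illustrates that general idea, not the paper's actual architecture: the module name, embedding size, layer counts, and the alternation of spatial attention (across points) and temporal attention (along each track) are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class TrajectoryDepthModel(nn.Module):
    """Minimal sketch: attend across tracked points (spatial), then along
    each trajectory (temporal), and regress a relative depth per point per
    frame. Sizes and structure are assumptions, not the paper's design."""

    def __init__(self, dim=128, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Linear(2, dim)  # embed (x, y) track coordinates
        make_layer = lambda: nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        self.spatial = nn.TransformerEncoder(make_layer(), num_layers=layers)
        self.temporal = nn.TransformerEncoder(make_layer(), num_layers=layers)
        self.head = nn.Linear(dim, 1)  # scalar relative depth

    def forward(self, tracks):
        # tracks: (B, T, N, 2) = batch, frames, tracked points, (x, y)
        B, T, N, _ = tracks.shape
        x = self.embed(tracks)                      # (B, T, N, D)
        x = self.spatial(x.reshape(B * T, N, -1))   # attention across points
        x = x.reshape(B, T, N, -1).permute(0, 2, 1, 3)
        x = self.temporal(x.reshape(B * N, T, -1))  # attention along time
        depth = self.head(x).reshape(B, N, T)
        return depth.permute(0, 2, 1)               # (B, T, N)

# Usage: trajectories would come from any off-the-shelf 2D point tracker.
tracks = torch.rand(1, 24, 64, 2)   # 24 frames, 64 tracked points
depth = TrajectoryDepthModel()(tracks)
print(depth.shape)                  # torch.Size([1, 24, 64])
```

In this toy version the same token set is reshaped twice so that each attention pass mixes information along one axis at a time, which keeps the cost linear in the other axis; the actual model presumably adds positional encodings, visibility masks, and a training loss suited to relative depth.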
Similar Papers
Trajectory Densification and Depth from Perspective-based Blur
CV and Pattern Recognition
Lets cameras see depth without special parts.
Depth as Points: Center Point-based Depth Estimation
CV and Pattern Recognition
Helps self-driving cars see better and faster.
CylinderDepth: Cylindrical Spatial Attention for Multi-View Consistent Self-Supervised Surround Depth Estimation
CV and Pattern Recognition
Makes 3D pictures from many cameras match perfectly.