Seurat: From Moving Points to Depth

Published: April 20, 2025 | arXiv ID: 2504.14687v1

By: Seokju Cho, Jiahui Huang, Seungryong Kim, and more

Potential Business Impact:

Lets computers estimate how far away objects are using only ordinary video from a single camera.

Business Areas:
Motion Capture, Media and Entertainment, Video

Accurate depth estimation from monocular videos remains challenging due to ambiguities inherent in single-view geometry, as crucial depth cues like stereopsis are absent. However, humans often perceive relative depth intuitively by observing variations in the size and spacing of objects as they move. Inspired by this, we propose a novel method that infers relative depth by examining the spatial relationships and temporal evolution of a set of tracked 2D trajectories. Specifically, we use off-the-shelf point tracking models to capture 2D trajectories. Then, our approach employs spatial and temporal transformers to process these trajectories and directly infer depth changes over time. Evaluated on the TAPVid-3D benchmark, our method demonstrates robust zero-shot performance, generalizing effectively from synthetic to real-world datasets. Results indicate that our approach achieves temporally smooth, high-accuracy depth predictions across diverse domains.
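To make the pipeline concrete, below is a minimal sketch (not the authors' code) of the idea the abstract describes: embed tracked 2D trajectories, alternate spatial attention (across points within a frame) and temporal attention (across frames within a track), then regress per-point relative depth over time. All module names, shapes, and hyperparameters here are assumptions for illustration.

```python
# Minimal sketch (assumptions, not the paper's implementation): alternating
# spatial/temporal self-attention over tracked 2D trajectories to predict
# per-point relative depth over time.
import torch
import torch.nn as nn


class TrajectoryDepthSketch(nn.Module):
    def __init__(self, dim=128, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Linear(2, dim)  # embed (x, y) track coordinates
        make_encoder = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True), layers
        )
        self.spatial = make_encoder()   # attention across points within a frame
        self.temporal = make_encoder()  # attention across frames within a track
        self.head = nn.Linear(dim, 1)   # per-point, per-frame relative depth

    def forward(self, tracks):
        # tracks: (B, N points, T frames, 2) normalized 2D trajectories
        B, N, T, _ = tracks.shape
        x = self.embed(tracks)                                            # (B, N, T, D)
        x = x.permute(0, 2, 1, 3).reshape(B * T, N, -1)
        x = self.spatial(x)                                               # mix across points
        x = x.reshape(B, T, N, -1).permute(0, 2, 1, 3).reshape(B * N, T, -1)
        x = self.temporal(x)                                              # mix across time
        x = x.reshape(B, N, T, -1)
        return self.head(x).squeeze(-1)                                   # (B, N, T)


# Example: 64 points tracked over 24 frames (e.g., from an off-the-shelf tracker)
depth = TrajectoryDepthSketch()(torch.randn(1, 64, 24, 2))
print(depth.shape)  # torch.Size([1, 64, 24])
```

In practice, the 2D trajectories would come from an off-the-shelf point tracker, as the abstract notes; the sketch only illustrates the factorized spatial-temporal transformer structure.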

Page Count
13 pages

Category
Computer Science:
Computer Vision and Pattern Recognition