The Dynamic Prior: Understanding 3D Structures for Casual Dynamic Videos
By: Zhuoyuan Wu, Xurui Yang, Jiahui Huang, and more
Estimating accurate camera poses, 3D scene geometry, and object motion from in-the-wild videos is a long-standing challenge for classical structure-from-motion pipelines due to the presence of dynamic objects. Recent learning-based methods attempt to overcome this challenge by training motion estimators to filter out dynamic objects and focus on the static background. However, their performance is largely limited by the availability of large-scale motion segmentation datasets, resulting in inaccurate segmentation and, therefore, inferior structural 3D understanding. In this work, we introduce the Dynamic Prior to robustly identify dynamic objects without task-specific training, leveraging the powerful reasoning capabilities of Vision-Language Models (VLMs) and the fine-grained spatial segmentation capacity of SAM2. The Dynamic Prior can be seamlessly integrated into state-of-the-art pipelines for camera pose optimization, depth reconstruction, and 4D trajectory estimation. Extensive experiments on both synthetic and real-world videos demonstrate that the Dynamic Prior not only achieves state-of-the-art performance on motion segmentation, but also significantly improves accuracy and robustness for structural 3D understanding.
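To make the described pipeline concrete, the sketch below illustrates the general idea of the abstract under stated assumptions: a VLM is asked *which* objects in a frame are dynamic, SAM2 is used to localize *where* those objects are as pixel masks, and the masks are then used to discard correspondences on moving objects before any pose or geometry solving. The function names (`vlm_list_dynamic_objects`, `sam2_segment`, `dynamic_prior_masks`, `static_correspondences`) are hypothetical stand-ins written for illustration, not the authors' code or the real SAM2 API; the two model calls are stubbed so the flow runs end to end.

```python
import numpy as np

# Hypothetical stand-ins for the two foundation models named in the abstract.
# A real implementation would query a Vision-Language Model with the frame plus
# a "which objects here are moving?" prompt, and prompt the SAM2 predictor with
# the resulting object names/boxes; here both are stubs so the sketch is runnable.

def vlm_list_dynamic_objects(frame: np.ndarray) -> list[str]:
    """Ask a VLM which object categories in the frame are likely dynamic (stub)."""
    return ["person", "car"]  # placeholder answer

def sam2_segment(frame: np.ndarray, labels: list[str]) -> np.ndarray:
    """Return a boolean HxW mask covering the named objects (SAM2 stand-in, stub)."""
    h, w = frame.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    mask[h // 3 : 2 * h // 3, w // 3 : 2 * w // 3] = True  # placeholder region
    return mask

def dynamic_prior_masks(frames: list[np.ndarray]) -> list[np.ndarray]:
    """Per-frame dynamic-object masks: the VLM decides *what* moves, SAM2 *where*."""
    return [sam2_segment(f, vlm_list_dynamic_objects(f)) for f in frames]

def static_correspondences(pts_a, pts_b, mask_a, mask_b):
    """Keep only matches whose endpoints land on static (unmasked) pixels, so a
    downstream pose/depth solver never sees points on moving objects."""
    keep = []
    for (xa, ya), (xb, yb) in zip(pts_a, pts_b):
        if not mask_a[int(ya), int(xa)] and not mask_b[int(yb), int(xb)]:
            keep.append(((xa, ya), (xb, yb)))
    return keep

if __name__ == "__main__":
    frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(2)]
    masks = dynamic_prior_masks(frames)
    matches = static_correspondences(
        [(100.0, 120.0), (320.0, 240.0)],   # toy matches in frame 0
        [(102.0, 121.0), (322.0, 241.0)],   # and their locations in frame 1
        masks[0], masks[1],
    )
    print(f"{len(matches)} static correspondences survive the dynamic mask")
```

The point of the sketch is the division of labor: the VLM supplies semantic reasoning about motion without any task-specific training, and SAM2 turns that answer into fine-grained masks, which is why such a prior can plug into existing pose, depth, and trajectory pipelines simply by filtering their inputs.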
Similar Papers
DynamicPose: Real-time and Robust 6D Object Pose Tracking for Fast-Moving Cameras and Objects
CV and Pattern Recognition
Tracks moving things even when both the camera and the object move fast.
WALDO: Where Unseen Model-based 6D Pose Estimation Meets Occlusion
CV and Pattern Recognition
Helps robots see objects even when they're partly hidden.
Understanding Dynamic Scenes in Ego Centric 4D Point Clouds
CV and Pattern Recognition
Helps robots understand moving things and how they interact.