ViPE: Video Pose Engine for 3D Geometric Perception
By: Jiahui Huang, Qunjie Zhou, Hesam Rabeti, and more
Potential Business Impact:
Makes robots understand 3D shapes from videos.
Accurate 3D geometric perception is an important prerequisite for a wide range of spatial AI systems. While state-of-the-art methods depend on large-scale training data, acquiring consistent and precise 3D annotations from in-the-wild videos remains a key challenge. In this work, we introduce ViPE, a handy and versatile video processing engine designed to bridge this gap. ViPE efficiently estimates camera intrinsics, camera motion, and dense, near-metric depth maps from unconstrained raw videos. It is robust to diverse scenarios, including dynamic selfie videos, cinematic shots, and dashcams, and supports various camera models such as pinhole, wide-angle, and 360° panoramas. We evaluate ViPE on multiple benchmarks. Notably, it outperforms existing uncalibrated pose estimation baselines by 18%/50% on TUM/KITTI sequences, and runs at 3-5 FPS on a single GPU for standard input resolutions. We use ViPE to annotate a large-scale collection of videos. This collection includes around 100K real-world internet videos, 1M high-quality AI-generated videos, and 2K panoramic videos, totaling approximately 96M frames -- all annotated with accurate camera poses and dense depth maps. We open-source ViPE and the annotated dataset with the hope of accelerating the development of spatial AI systems.
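The quantities ViPE estimates (camera intrinsics and per-pixel depth) combine through the standard pinhole camera model to yield 3D points. The sketch below is not ViPE's actual API; it is a minimal, hypothetical illustration of that back-projection step, assuming a simple pinhole model with focal lengths `fx, fy` and principal point `(cx, cy)`:

```python
# Hedged sketch: illustrates the standard pinhole back-projection that
# links camera intrinsics and a depth map to 3D geometry. The function
# name and parameters are illustrative, not ViPE's real interface.

def unproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth into camera-frame coordinates."""
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return (x, y, depth)

# Example: a 640x480 frame with the principal point at the image center.
# A pixel at the principal point maps straight down the optical axis.
center = unproject(320.0, 240.0, 2.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
# center == (0.0, 0.0, 2.0)

# A pixel 100 px right of center at the same depth is offset in x.
right = unproject(420.0, 240.0, 2.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
# right == (0.4, 0.0, 2.0)
```

Given accurate intrinsics and near-metric depth, the same relation applied per pixel produces a dense point cloud, which is the raw material for the downstream spatial AI applications the paper targets.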
Similar Papers
KM-ViPE: Online Tightly Coupled Vision-Language-Geometry Fusion for Open-Vocabulary Semantic SLAM
CV and Pattern Recognition
Lets robots understand and map moving things.
An End-to-End Framework for Video Multi-Person Pose Estimation
CV and Pattern Recognition
Tracks people's movements in videos better.
PersPose: 3D Human Pose Estimation with Perspective Encoding and Perspective Rotation
CV and Pattern Recognition
Helps computers guess body positions from pictures.