Dynamic Point Maps: A Versatile Representation for Dynamic 3D Reconstruction
By: Edgar Sucar, Zihang Lai, Eldar Insafutdinov, and others
Potential Business Impact:
Tracks and reconstructs moving objects in 3D from video.
DUSt3R has recently shown that one can reduce many tasks in multi-view geometry, including estimating camera intrinsics and extrinsics, reconstructing the scene in 3D, and establishing image correspondences, to the prediction of a pair of viewpoint-invariant point maps, i.e., pixel-aligned point clouds defined in a common reference frame. This formulation is elegant and powerful, but it cannot handle dynamic scenes. To address this challenge, we introduce the concept of Dynamic Point Maps (DPM), extending standard point maps to support 4D tasks such as motion segmentation, scene flow estimation, 3D object tracking, and 2D correspondence. Our key intuition is that, when time is introduced, there are several possible spatial and time references that can be used to define the point maps. We identify a minimal subset of such combinations that can be regressed by a network to solve the subtasks mentioned above. We train a DPM predictor on a mixture of synthetic and real data and evaluate it across diverse benchmarks for video depth prediction, dynamic point cloud reconstruction, 3D scene flow and object pose tracking, achieving state-of-the-art performance. Code, models and additional results are available at https://www.robots.ox.ac.uk/~vgg/research/dynamic-point-maps/.
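To make the idea concrete, here is a minimal sketch (not the paper's actual parameterization; function names, shapes, and the threshold are illustrative assumptions) of how two pixel-aligned point maps of the same image, expressed at two different times in a shared spatial reference frame, would yield scene flow and a motion segmentation mask:

```python
import numpy as np

def scene_flow_from_dpms(pm_t_at_t, pm_t_at_tprime):
    """Scene flow as the per-pixel 3D displacement between two point maps
    of the same image: one with points at time t, one with the same points
    advected to time t', both in a shared spatial reference frame.
    Both inputs are (H, W, 3) arrays; returns an (H, W, 3) flow field.
    (Hypothetical interface, inferred from the abstract.)"""
    return pm_t_at_tprime - pm_t_at_t

def motion_mask(flow, thresh=0.05):
    """Label pixels as dynamic when their 3D displacement magnitude
    exceeds a threshold (threshold value is an arbitrary example)."""
    return np.linalg.norm(flow, axis=-1) > thresh

# Toy example: a 2x2 image where only the top-left pixel moves.
pm_t = np.zeros((2, 2, 3))
pm_tprime = pm_t.copy()
pm_tprime[0, 0] = [0.1, 0.0, 0.0]  # this point moved 0.1 units along x

flow = scene_flow_from_dpms(pm_t, pm_tprime)
mask = motion_mask(flow)
```

The same subtraction-between-references pattern underlies the other 4D tasks the abstract lists: choosing which spatial/time reference each regressed point map uses determines whether the difference gives depth change, scene flow, or object motion.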
Similar Papers
D^2USt3R: Enhancing 3D Reconstruction with 4D Pointmaps for Dynamic Scenes
CV and Pattern Recognition
Improves 3D reconstruction of moving scenes.
C4D: 4D Made from 3D through Dual Correspondences
CV and Pattern Recognition
Recovers moving 3D scenes from video.
DePT3R: Joint Dense Point Tracking and 3D Reconstruction of Dynamic Scenes in a Single Forward Pass
CV and Pattern Recognition
Builds moving 3D scenes from many pictures.