PAGE-4D: Disentangled Pose and Geometry Estimation for 4D Perception
By: Kaichen Zhou, Yuhan Wang, Grace Chen, and more
Potential Business Impact:
Helps cameras understand moving things in 3D.
Recent 3D feed-forward models, such as the Visual Geometry Grounded Transformer (VGGT), have shown strong capability in inferring 3D attributes of static scenes. However, since they are typically trained on static datasets, these models often struggle in real-world scenarios involving complex dynamic elements, such as moving humans or deformable objects like umbrellas. To address this limitation, we introduce PAGE-4D, a feed-forward model that extends VGGT to dynamic scenes, enabling camera pose estimation, depth prediction, and point cloud reconstruction -- all without post-processing. A central challenge in multi-task 4D reconstruction is the inherent conflict between tasks: accurate camera pose estimation requires suppressing dynamic regions, while geometry reconstruction requires modeling them. To resolve this tension, we propose a dynamics-aware aggregator that disentangles static and dynamic information by predicting a dynamics-aware mask -- suppressing motion cues for pose estimation while amplifying them for geometry reconstruction. Extensive experiments show that PAGE-4D consistently outperforms the original VGGT in dynamic scenarios, achieving superior results in camera pose estimation, monocular and video depth estimation, and dense point map reconstruction.
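The core idea of the dynamics-aware mask can be illustrated with a minimal sketch. This is not the paper's implementation: the token shapes, the sigmoid gating, and the suppress/amplify weighting below are all illustrative assumptions, standing in for whatever the aggregator actually predicts.

```python
import numpy as np

# Hypothetical sketch of dynamics-aware gating (illustrative, not PAGE-4D's code).
# tokens: (N, D) per-patch features; dyn_logits: (N,) predicted dynamics scores.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((6, 4))
dyn_logits = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0])

# Sigmoid maps logits to a soft mask: ~0 for static patches, ~1 for dynamic ones.
mask = 1.0 / (1.0 + np.exp(-dyn_logits))

# Pose branch: down-weight dynamic patches so camera estimation
# relies on the static structure of the scene.
pose_feats = tokens * (1.0 - mask)[:, None]

# Geometry branch: emphasize dynamic patches so moving surfaces
# are still modeled and reconstructed.
geom_feats = tokens * (1.0 + mask)[:, None]

print(pose_feats.shape, geom_feats.shape)
```

One soft mask thus routes the same features two ways, which is the disentanglement the abstract describes: the two tasks read complementary views of the scene instead of competing over one representation.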
Similar Papers
DGGT: Feedforward 4D Reconstruction of Dynamic Driving Scenes using Unposed Images
CV and Pattern Recognition
Lets self-driving cars see and remember 3D scenes.
4DLangVGGT: 4D Language-Visual Geometry Grounded Transformer
CV and Pattern Recognition
AI understands and describes moving 3D scenes.
DynaPose4D: High-Quality 4D Dynamic Content Generation via Pose Alignment Loss
CV and Pattern Recognition
Makes one picture move like a video.