DriveVGGT: Visual Geometry Transformer for Autonomous Driving
By: Xiaosong Jia, Yanhao Liu, Junqi You, and more
Potential Business Impact:
Helps self-driving cars see better in 3D.
Feed-forward reconstruction has recently gained significant attention, with VGGT being a notable example. However, directly applying VGGT to autonomous driving (AD) systems yields sub-optimal results because the two tasks rely on different priors. In AD systems, several important new priors need to be considered: (i) The overlap between camera views is minimal, since autonomous driving sensor setups are designed to achieve coverage at low cost. (ii) The camera intrinsics and extrinsics are known, which adds constraints on the output and also enables estimation of absolute scale. (iii) The relative positions of all cameras remain fixed even though the ego vehicle is in motion. To fully integrate these priors into a feed-forward framework, we propose DriveVGGT, a scale-aware 4D reconstruction framework specifically designed for autonomous driving data. Specifically, we propose a Temporal Video Attention (TVA) module that processes each camera's video independently, better leveraging the spatiotemporal continuity within each single-camera sequence. We then propose a Multi-camera Consistency Attention (MCA) module that performs window attention with normalized relative pose embeddings, establishing consistency across different cameras while restricting each token to attend only to nearby frames. Finally, we extend the standard VGGT heads with an absolute scale head and an ego vehicle pose head. Experiments show that DriveVGGT outperforms VGGT, StreamVGGT, and fastVGGT on autonomous driving data, while extensive ablation studies verify the effectiveness of the proposed designs.
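To make the described architecture more concrete, below is a minimal, hypothetical PyTorch sketch of how per-camera temporal attention (TVA-style), cross-camera windowed attention biased by relative pose embeddings (MCA-style), and the added absolute-scale and ego-pose heads could be wired together. This is not the authors' implementation; all module names, tensor shapes, the 3x4 relative-extrinsics parameterization, the window size, and the head designs are assumptions for illustration only.

```python
# Illustrative sketch only (not the DriveVGGT code). Shapes, pose parameterization,
# and hyperparameters are assumptions.
import torch
import torch.nn as nn


class TemporalVideoAttention(nn.Module):
    """Self-attention over the token sequence of a single camera stream (TVA-style)."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                               # x: (B, T, N, C) tokens of one camera
        B, T, N, C = x.shape
        seq = x.reshape(B, T * N, C)                    # flatten frames x patch tokens
        out, _ = self.attn(self.norm(seq), self.norm(seq), self.norm(seq))
        return (seq + out).reshape(B, T, N, C)


class MultiCameraConsistencyAttention(nn.Module):
    """Windowed attention across cameras with normalized relative pose embeddings (MCA-style)."""
    def __init__(self, dim: int, heads: int = 8, window: int = 2):
        super().__init__()
        self.window = window                            # +/- frames each token may attend to
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pose_embed = nn.Linear(12, dim)            # assumed 3x4 relative extrinsics -> embedding
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, rel_pose):
        # x: (B, V, T, N, C) tokens for V cameras; rel_pose: (B, V, 12) normalized
        # relative extrinsics of each camera w.r.t. a reference camera.
        B, V, T, N, C = x.shape
        x = x + self.pose_embed(rel_pose)[:, :, None, None, :]
        out = torch.zeros_like(x)
        for t in range(T):                              # restrict attention to nearby frames
            lo, hi = max(0, t - self.window), min(T, t + self.window + 1)
            q = x[:, :, t].reshape(B, V * N, C)
            kv = x[:, :, lo:hi].reshape(B, V * (hi - lo) * N, C)
            a, _ = self.attn(self.norm(q), self.norm(kv), self.norm(kv))
            out[:, :, t] = (q + a).reshape(B, V, N, C)
        return out


class DriveVGGTSketch(nn.Module):
    """Toy composition: per-camera TVA, cross-camera MCA, plus scale and ego-pose heads."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.tva = TemporalVideoAttention(dim)
        self.mca = MultiCameraConsistencyAttention(dim)
        self.scale_head = nn.Linear(dim, 1)             # absolute metric scale (assumed scalar)
        self.ego_pose_head = nn.Linear(dim, 6)          # per-frame ego translation + rotation (assumed 6-DoF)

    def forward(self, tokens, rel_pose):
        # tokens: (B, V, T, N, C) patch tokens per camera/frame from a shared backbone.
        B, V, T, N, C = tokens.shape
        per_cam = torch.stack([self.tva(tokens[:, v]) for v in range(V)], dim=1)
        fused = self.mca(per_cam, rel_pose)
        pooled = fused.mean(dim=(1, 3))                 # (B, T, C)
        return self.scale_head(pooled.mean(dim=1)), self.ego_pose_head(pooled)


if __name__ == "__main__":
    B, V, T, N, C = 1, 6, 4, 16, 256
    model = DriveVGGTSketch(C)
    scale, ego_pose = model(torch.randn(B, V, T, N, C), torch.randn(B, V, 12))
    print(scale.shape, ego_pose.shape)                  # (1, 1) (1, 4, 6)
```

The sketch reflects the priors the abstract lists: each camera stream is processed independently before fusion, the known (fixed) relative camera poses are injected as embeddings during cross-camera attention, and dedicated heads regress absolute scale and ego motion on top of the fused features.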
Similar Papers
DGGT: Feedforward 4D Reconstruction of Dynamic Driving Scenes using Unposed Images
CV and Pattern Recognition
Lets self-driving cars see and remember 3D scenes.
Building temporally coherent 3D maps with VGGT for memory-efficient Semantic SLAM
CV and Pattern Recognition
Helps robots see and understand moving things.
VGD: Visual Geometry Gaussian Splatting for Feed-Forward Surround-view Driving Reconstruction
CV and Pattern Recognition
Makes self-driving cars see better from all sides.