DVGT: Driving Visual Geometry Transformer
By: Sicheng Zuo, Zixun Xie, Wenzhao Zheng, and more
Potential Business Impact:
Helps cars see and map the world in 3D.
Perceiving and reconstructing 3D scene geometry from visual inputs is crucial for autonomous driving. However, the field still lacks a driving-targeted dense geometry perception model that can adapt to different scenarios and camera configurations. To bridge this gap, we propose the Driving Visual Geometry Transformer (DVGT), which reconstructs a global dense 3D point map from a sequence of unposed multi-view visual inputs. We first extract visual features for each image using a DINO backbone, then employ alternating intra-view local attention, cross-view spatial attention, and cross-frame temporal attention to infer geometric relations across images. Multiple heads then decode a global point map in the ego coordinate system of the first frame, along with the ego pose of each frame. Unlike conventional methods that rely on precise camera parameters, DVGT is free of explicit 3D geometric priors, enabling flexible processing of arbitrary camera configurations. DVGT directly predicts metric-scaled geometry from image sequences, eliminating the need for post-alignment with external sensors. Trained on a large mixture of driving datasets, including nuScenes, OpenScene, Waymo, KITTI, and DDAD, DVGT significantly outperforms existing models across diverse scenarios. Code is available at https://github.com/wzzheng/DVGT.
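The alternating attention scheme is the core of the architecture the abstract describes. Below is a minimal PyTorch sketch of one such block, assuming a pre-norm transformer layout built from standard nn.MultiheadAttention layers; the class name AlternatingAttentionBlock, the feature dimensions, and the residual structure are illustrative assumptions, not the authors' implementation (see the linked repository for the actual code).

```python
import torch
import torch.nn as nn

class AlternatingAttentionBlock(nn.Module):
    """One block of the alternating attention scheme described above:
    intra-view local, cross-view spatial, and cross-frame temporal
    attention. A sketch under assumed dimensions, not the released code."""

    def __init__(self, dim: int = 768, heads: int = 12):
        super().__init__()
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(3)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (T frames, V views, N patch tokens, C channels) of DINO features.
        T, V, N, C = x.shape

        # 1) Intra-view local attention: tokens attend within a single image.
        h = x.reshape(T * V, N, C)
        q = self.norms[0](h)
        h = h + self.local_attn(q, q, q, need_weights=False)[0]

        # 2) Cross-view spatial attention: tokens attend across all cameras
        #    of the same frame.
        h = h.reshape(T, V * N, C)
        q = self.norms[1](h)
        h = h + self.spatial_attn(q, q, q, need_weights=False)[0]

        # 3) Cross-frame temporal attention: each spatial position attends
        #    across the frames of the sequence.
        h = h.reshape(T, V, N, C).permute(1, 2, 0, 3).reshape(V * N, T, C)
        q = self.norms[2](h)
        h = h + self.temporal_attn(q, q, q, need_weights=False)[0]

        return h.reshape(V, N, T, C).permute(2, 0, 1, 3)

# Example: 4 frames from a 6-camera rig, 196 patch tokens per image.
block = AlternatingAttentionBlock()
features = torch.randn(4, 6, 196, 768)
out = block(features)  # same shape as the input: (4, 6, 196, 768)
```

Stacking several such blocks over DINO patch tokens and attaching point-map and pose heads would mirror the pipeline the abstract outlines.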
Similar Papers
DriveVGGT: Visual Geometry Transformer for Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars see better in 3D.
DGGT: Feedforward 4D Reconstruction of Dynamic Driving Scenes using Unposed Images
CV and Pattern Recognition
Lets self-driving cars see and remember 3D scenes.
On Geometric Understanding and Learned Data Priors in VGGT
CV and Pattern Recognition
Helps computers understand 3D scenes from pictures.