VGGT: Visual Geometry Grounded Transformer
By: Jianyuan Wang, Minghao Chen, Nikita Karaev, et al.
Potential Business Impact:
Creates 3D worlds from pictures in seconds.
We present VGGT, a feed-forward neural network that directly infers all key 3D attributes of a scene, including camera parameters, point maps, depth maps, and 3D point tracks, from one, a few, or hundreds of its views. This approach is a step forward in 3D computer vision, where models have typically been constrained to and specialized for single tasks. It is also simple and efficient, reconstructing images in under one second, and still outperforming alternatives that require post-processing with visual geometry optimization techniques. The network achieves state-of-the-art results in multiple 3D tasks, including camera parameter estimation, multi-view depth estimation, dense point cloud reconstruction, and 3D point tracking. We also show that using pretrained VGGT as a feature backbone significantly enhances downstream tasks, such as non-rigid point tracking and feed-forward novel view synthesis. Code and models are publicly available at https://github.com/facebookresearch/vggt.
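To make the abstract's interface concrete, here is a minimal sketch of what a single feed-forward pass predicting all of the listed 3D attributes might look like. All names, shapes, and the `fake_forward` stand-in are illustrative assumptions, not VGGT's actual API; the real model is a transformer released at the repository above.

```python
import numpy as np
from dataclasses import dataclass

# Hypothetical container for the per-scene attributes the abstract lists;
# field names and array shapes are assumptions for illustration only.
@dataclass
class SceneOutputs:
    camera_params: np.ndarray  # (N, 9) e.g. intrinsics + pose per view
    depth_maps: np.ndarray     # (N, H, W) per-view depth
    point_maps: np.ndarray     # (N, H, W, 3) per-pixel 3D points
    point_tracks: np.ndarray   # (T, N, 2) T query points tracked across N views

def fake_forward(views: np.ndarray, num_tracks: int = 16) -> SceneOutputs:
    """Stand-in for one feed-forward pass: a single call yields every
    attribute at once, with no per-task pipeline or post-optimization.

    `views` is (N, H, W, 3); a real model would run a transformer here.
    """
    n, h, w, _ = views.shape
    return SceneOutputs(
        camera_params=np.zeros((n, 9)),
        depth_maps=np.zeros((n, h, w)),
        point_maps=np.zeros((n, h, w, 3)),
        point_tracks=np.zeros((num_tracks, n, 2)),
    )

# The same call shape covers one, a few, or hundreds of views,
# as the abstract emphasizes.
outputs = fake_forward(np.zeros((4, 64, 64, 3), dtype=np.float32))
```

The point of the sketch is the design choice the abstract highlights: one network, one pass, all 3D outputs, rather than separate specialized models per task.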
Similar Papers
On Geometric Understanding and Learned Data Priors in VGGT
CV and Pattern Recognition
Helps computers understand 3D scenes from pictures.
Quantized Visual Geometry Grounded Transformer
CV and Pattern Recognition
Makes 3D cameras faster and smaller.
DriveVGGT: Visual Geometry Transformer for Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars see better in 3D.