VersaQ-3D: A Reconfigurable Accelerator Enabling Feed-Forward and Generalizable 3D Reconstruction via Versatile Quantization
By: Yipu Zhang, Jintao Cheng, Xingyu Liu, and more
Potential Business Impact:
Makes 3D models from photos directly on phones.
The Visual Geometry Grounded Transformer (VGGT) enables strong feed-forward 3D reconstruction without per-scene optimization. However, its billion-parameter scale creates high memory and compute demands, hindering on-device deployment. Existing LLM quantization methods fail on VGGT due to saturated activation channels and diverse 3D semantics, which cause unreliable calibration. Furthermore, VGGT presents hardware challenges regarding precision-sensitive nonlinear operators and memory-intensive global attention. To address this, we propose VersaQ-3D, an algorithm-architecture co-design framework. Algorithmically, we introduce the first calibration-free, scene-agnostic quantization for VGGT down to 4-bit, leveraging orthogonal transforms to decorrelate features and suppress outliers. Architecturally, we design a reconfigurable accelerator supporting BF16, INT8, and INT4. A unified systolic datapath handles both linear and nonlinear operators, reducing latency by 60%, while two-stage recomputation-based tiling alleviates memory pressure for long-sequence attention. Evaluations show VersaQ-3D preserves 98-99% accuracy at W4A8. At W4A4, it outperforms prior methods by 1.61x-2.39x across diverse scenes. The accelerator delivers 5.2x-10.8x speedup over edge GPUs with low power, enabling efficient instant 3D reconstruction.
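The abstract's quantization idea, using an orthogonal transform to spread out saturated outlier channels before low-bit quantization, can be illustrated with a minimal sketch. This is an assumption-laden toy (Hadamard rotation, symmetric per-tensor INT4), not VersaQ-3D's actual algorithm:

```python
import numpy as np

def hadamard(n):
    # Build an orthonormal n x n Hadamard matrix (n must be a power of two).
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def quantize_int4(x):
    # Symmetric per-tensor INT4 quantization: integer levels in [-8, 7].
    scale = np.abs(x).max() / 7.0
    q = np.clip(np.round(x / scale), -8, 7)
    return q * scale  # dequantized approximation

rng = np.random.default_rng(0)
n = 64
x = rng.normal(size=n)
x[3] += 20.0  # one saturated "outlier" channel, as described for VGGT activations

H = hadamard(n)
x_rot = H @ x                           # rotation spreads the outlier's energy
x_hat_rot = H.T @ quantize_int4(x_rot)  # quantize in rotated space, rotate back
x_hat_plain = quantize_int4(x)          # naive quantization for comparison

err_rot = np.linalg.norm(x - x_hat_rot)
err_plain = np.linalg.norm(x - x_hat_plain)
print(err_rot < err_plain)  # the rotated version has much lower error here
```

Because the transform is orthogonal, the rotation is lossless in exact arithmetic; the gain comes entirely from the quantizer seeing a flatter magnitude distribution, so the scale is no longer dominated by a single outlier.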
Similar Papers
Quantized Visual Geometry Grounded Transformer
CV and Pattern Recognition
Makes 3D cameras faster and smaller.
SwiftVGGT: A Scalable Visual Geometry Grounded Transformer for Large-Scale Scenes
CV and Pattern Recognition
Builds detailed 3D maps much faster.
Building temporally coherent 3D maps with VGGT for memory-efficient Semantic SLAM
CV and Pattern Recognition
Helps robots see and understand moving things.