XStreamVGGT: Extremely Memory-Efficient Streaming Vision Geometry Grounded Transformer with KV Cache Compression
By: Zunhai Su, Weihao Ye, Hansen Feng, and more
Potential Business Impact:
Makes 3D video processing use much less memory.
Learning-based 3D visual geometry models have benefited substantially from large-scale transformers. Among these, StreamVGGT leverages frame-wise causal attention for strong streaming reconstruction, but suffers from unbounded KV cache growth, leading to escalating memory consumption and inference latency as input frames accumulate. We propose XStreamVGGT, a tuning-free approach that systematically compresses the KV cache through joint pruning and quantization, enabling extremely memory-efficient streaming inference. Specifically, redundant KVs originating from multi-view inputs are pruned through efficient token importance identification, enforcing a fixed memory budget. Leveraging the distinctive distribution of KV tensors, we further incorporate KV quantization to reduce memory consumption. Extensive evaluations show that XStreamVGGT incurs negligible performance degradation in most settings while reducing memory usage by 4.42$\times$ and accelerating inference by 5.48$\times$, enabling scalable and practical streaming 3D applications. The code is available at https://github.com/ywh187/XStreamVGGT/.
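To make the two ingredients the abstract names concrete, below is a minimal PyTorch sketch of importance-based KV pruning to a fixed token budget, followed by low-bit quantization of the surviving entries. The importance score (accumulated attention received per cached token) and all function names here are illustrative assumptions, not the paper's actual implementation; see the linked repository for the real code.

```python
import torch

def prune_kv_cache(keys, values, attn_scores, budget):
    """Keep only the `budget` most important tokens in the KV cache.

    keys, values: (num_tokens, head_dim) cached tensors for one head.
    attn_scores:  (num_tokens,) accumulated attention each cached token
                  has received -- a common proxy for token importance
                  (a hypothetical criterion; the paper's may differ).
    """
    if keys.shape[0] <= budget:
        return keys, values
    # Select the top-`budget` tokens and restore their original order.
    topk = torch.topk(attn_scores, k=budget).indices.sort().values
    return keys[topk], values[topk]

def quantize_per_token(x, n_bits=4):
    """Uniform asymmetric per-token quantization of a KV tensor.

    Returns integer codes plus the scale/zero-point needed to
    dequantize; storing n_bits codes instead of fp16 shrinks the
    cache by roughly a 16/n_bits factor.
    """
    qmax = 2 ** n_bits - 1
    xmin = x.amin(dim=-1, keepdim=True)
    xmax = x.amax(dim=-1, keepdim=True)
    scale = (xmax - xmin).clamp(min=1e-8) / qmax
    codes = ((x - xmin) / scale).round().clamp(0, qmax).to(torch.uint8)
    return codes, scale, xmin

def dequantize(codes, scale, zero):
    """Recover an approximate fp tensor from quantized codes."""
    return codes.to(scale.dtype) * scale + zero
```

In a streaming loop, each new frame's KVs would be appended to the cache, the cache pruned back to the budget, and the result stored quantized, so memory stays bounded no matter how many frames arrive.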
Similar Papers
InfiniteVGGT: Visual Geometry Grounded Transformer for Endless Streams
CV and Pattern Recognition
Lets computers remember 3D shapes forever.
FlashVGGT: Efficient and Scalable Visual Geometry Transformers with Compressed Descriptor Attention
CV and Pattern Recognition
Makes 3D pictures from many photos faster.
LiteVGGT: Boosting Vanilla VGGT via Geometry-aware Cached Token Merging
CV and Pattern Recognition
Makes 3D pictures from many photos faster.