Score: 1

HTTM: Head-wise Temporal Token Merging for Faster VGGT

Published: November 26, 2025 | arXiv ID: 2511.21317v1

By: Weitian Wang, Lukas Meiner, Rai Shubham, and more

Potential Business Impact:

Speeds up multi-view 3D scene reconstruction by up to 7x with negligible loss in quality.

Business Areas:
Image Recognition, Data and Analytics, Software

The Visual Geometry Grounded Transformer (VGGT) marks a significant leap forward in 3D scene reconstruction, as it is the first model that directly infers all key 3D attributes (camera poses, depths, and dense geometry) jointly in one pass. However, this joint inference mechanism requires global attention layers that perform all-to-all attention computation on tokens from all views. For reconstruction of large scenes with long-sequence inputs, this becomes a significant latency bottleneck. In this paper, we propose head-wise temporal token merging (HTTM), a training-free 3D token merging method for accelerating VGGT. Existing merging techniques merge tokens uniformly across attention heads, producing identical tokens in the layers' output and thereby hindering the model's representational ability. HTTM tackles this problem by merging tokens at multi-head granularity, which preserves the uniqueness of feature tokens after head concatenation. Additionally, this enables HTTM to leverage the spatial locality and temporal correspondence observed at the head level to achieve higher merging ratios with lower merging costs compared to existing methods. As a result, HTTM achieves up to 7x acceleration with negligible performance drops in GPU-based inference.
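The core mechanism, choosing merge partners independently in every attention head so that the concatenated head outputs remain distinct, can be illustrated with a minimal PyTorch sketch. This assumes a ToMe-style bipartite similarity match applied per head rather than the spatial-locality and temporal-correspondence matching HTTM actually exploits; `headwise_merge` and all shapes below are hypothetical, not the paper's implementation.

```python
# Minimal sketch of head-wise token merging (assumed ToMe-style bipartite
# matching per head); names and shapes are illustrative, not VGGT/HTTM's API.
import torch
import torch.nn.functional as F


def headwise_merge(x: torch.Tensor, r: int) -> torch.Tensor:
    """Merge r tokens independently inside every attention head.

    x: (batch, heads, tokens, dim_per_head) features before global attention.
    Returns (batch, heads, tokens - r, dim_per_head). Because each head picks
    its own merge pairs, tokens stay distinct after head concatenation.
    """
    b, h, n, d = x.shape
    # Alternate tokens into two sets and match across them (bipartite style).
    src, dst = x[:, :, ::2], x[:, :, 1::2]
    scores = torch.einsum(
        "bhsd,bhtd->bhst", F.normalize(src, dim=-1), F.normalize(dst, dim=-1)
    )
    # For every src token, its most similar dst partner, computed per head.
    best_sim, best_dst = scores.max(dim=-1)
    order = best_sim.argsort(dim=-1, descending=True)
    merge_idx, keep_idx = order[..., :r], order[..., r:]

    def take(t, idx):  # gather tokens along the token axis
        return t.gather(2, idx.unsqueeze(-1).expand(-1, -1, -1, d))

    # Average the r most redundant src tokens into their dst partners.
    dst = dst.scatter_reduce(
        2,
        best_dst.gather(2, merge_idx).unsqueeze(-1).expand(-1, -1, -1, d),
        take(src, merge_idx),
        reduce="mean",
        include_self=True,
    )
    return torch.cat([take(src, keep_idx), dst], dim=2)


# Example: 2 views x 512 tokens each, 16 heads of width 64; merge 128 tokens.
tokens = torch.randn(1, 16, 1024, 64)
print(headwise_merge(tokens, r=128).shape)  # torch.Size([1, 16, 896, 64])
```

By contrast, uniform merging picks one set of merge pairs for the whole layer, so the merged tokens become identical across heads after concatenation, which is the representational loss the abstract describes.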

Country of Origin
🇩🇪 Germany

Page Count
15 pages

Category
Computer Science:
Computer Vision and Pattern Recognition