A Unified Sparse Attention via Multi-Granularity Compression
By: Siran Liu, Zane Cao, Yongchao He
Potential Business Impact:
Makes AI understand long texts much faster.
Efficient long-context understanding and reasoning are increasingly vital for large language model (LLM) applications such as multi-turn dialogue and program analysis. However, the core self-attention mechanism scales quadratically with sequence length, creating a fundamental computational bottleneck. Existing sparse attention methods alleviate this issue but face trade-offs: training-based methods are costly and cannot be directly applied as acceleration plugins for other models, while inference-time methods often compromise efficiency or cross-modal generality. To address these limitations, we present UniSparse, a unified mechanism that introduces the notion of composite tokens, compact representations that aggregate multi-granularity contextual information. Building on this abstraction, UniSparse dynamically constructs sparse attention through multi-granularity compression and block-level selection, enabling efficient, hardware-friendly execution on GPUs. Across multiple modalities and tasks ranging from synthetic benchmarks to real-world applications, UniSparse consistently surpasses state-of-the-art sparse attention methods (e.g., MInference, XAttention, FlexPrefill) in both accuracy and efficiency, achieving $\ge$99% of full-attention accuracy and up to 2.61$\times$ faster attention computation than FlashAttention.
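To make the abstract's pipeline concrete, the sketch below illustrates the general idea of block-level sparse attention driven by multi-granularity compression: key/value context is pooled into composite tokens at several block sizes, query blocks are scored against those composite tokens, and full attention is computed only within the selected blocks. This is a minimal, non-causal illustration; the pooling choice, block sizes, granularities, top-k selection, and all function names are assumptions for exposition, not the paper's actual UniSparse algorithm or GPU kernels.

```python
# Minimal sketch of block-level sparse attention via multi-granularity
# compression. Assumes seq_len is divisible by block_size and by every
# granularity, and that each granularity is a multiple of block_size.
# All parameter choices here are illustrative assumptions.
import torch
import torch.nn.functional as F


def compress_blocks(x, block_size):
    """Mean-pool a (seq, dim) tensor into (num_blocks, dim) composite tokens."""
    seq, dim = x.shape
    num_blocks = seq // block_size
    return x[: num_blocks * block_size].view(num_blocks, block_size, dim).mean(dim=1)


def sparse_attention(q, k, v, block_size=64, granularities=(64, 256), top_k=8):
    seq, dim = q.shape
    scale = dim ** -0.5

    # 1) Multi-granularity compression: score query blocks against composite
    #    tokens built at several granularities, accumulated per fine key block.
    q_blocks = compress_blocks(q, block_size)                  # (nq, dim)
    num_kv_blocks = k.shape[0] // block_size
    block_scores = torch.zeros(q_blocks.shape[0], num_kv_blocks,
                               dtype=q.dtype, device=q.device)
    for g in granularities:
        k_comp = compress_blocks(k, g)                         # (seq // g, dim)
        scores = (q_blocks @ k_comp.T) * scale                 # (nq, seq // g)
        # Broadcast coarse scores back onto the fine key blocks they cover.
        block_scores += scores.repeat_interleave(g // block_size, dim=1)

    # 2) Block-level selection: keep the top-k key/value blocks per query block.
    top_idx = block_scores.topk(min(top_k, num_kv_blocks), dim=-1).indices

    # 3) Sparse attention: each query block attends only to its selected blocks.
    out = torch.zeros_like(q)
    k_blk = k[: num_kv_blocks * block_size].view(num_kv_blocks, block_size, dim)
    v_blk = v[: num_kv_blocks * block_size].view(num_kv_blocks, block_size, dim)
    for i in range(q_blocks.shape[0]):
        q_i = q[i * block_size : (i + 1) * block_size]         # (block_size, dim)
        k_sel = k_blk[top_idx[i]].reshape(-1, dim)             # (top_k * bs, dim)
        v_sel = v_blk[top_idx[i]].reshape(-1, dim)
        attn = F.softmax((q_i @ k_sel.T) * scale, dim=-1)
        out[i * block_size : (i + 1) * block_size] = attn @ v_sel
    return out


if __name__ == "__main__":
    q = torch.randn(1024, 128)
    k = torch.randn(1024, 128)
    v = torch.randn(1024, 128)
    print(sparse_attention(q, k, v).shape)  # torch.Size([1024, 128])
```

The efficiency argument is that the selection step touches only compressed composite tokens, and the attention step touches only the chosen blocks, so the quadratic cost of full attention is replaced by a cost roughly linear in the number of selected blocks; an efficient realization would fuse these steps into GPU kernels rather than loop in Python as above.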
Similar Papers
SSA: Sparse Sparse Attention by Aligning Full and Sparse Attention Outputs in Feature Space
Computation and Language
Makes AI understand long stories better, faster.
Making Every Head Count: Sparse Attention Without the Speed-Performance Trade-off
Machine Learning (CS)
Makes AI understand long texts much faster.
Training-free Context-adaptive Attention for Efficient Long Context Modeling
Computation and Language
Makes AI understand long texts faster.