Score: 1

Radar: Fast Long-Context Decoding for Any Transformer

Published: March 13, 2025 | arXiv ID: 2503.10571v1

By: Yongchang Hao, Mengyao Zhai, Hossein Hajimirsadeghi and more

Potential Business Impact:

Lets AI models process and respond to long texts much faster, without any retraining.

Business Areas:
Text Analytics, Data and Analytics, Software

Transformer models have demonstrated exceptional performance across a wide range of applications. Although dot-product attention forms the foundation of Transformer models, it does not scale well to long-context data, since its time requirement grows quadratically with context length. In this work, we propose Radar, a training-free approach that accelerates inference by dynamically searching for the most important context tokens. For any pre-trained Transformer, Radar can reduce the decoding time complexity without training or heuristically evicting tokens. Moreover, we provide theoretical justification for our approach, demonstrating that Radar can reliably identify the most important tokens with high probability. We conduct extensive comparisons with previous methods on a wide range of tasks. The results demonstrate that Radar achieves state-of-the-art performance across different architectures with reduced time complexity, offering a practical solution for efficient long-context processing in Transformers.
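
To make the core idea concrete, below is a minimal NumPy sketch of decoding-time attention restricted to the most important context tokens for the current query. This is only an illustration of the general principle, not the Radar algorithm: the function name, parameters, and the naive exact-score selection are assumptions for this sketch (the paper's method avoids scoring every token exactly, which is where its speedup comes from).

```python
import numpy as np

def topk_attention_decode(q, K, V, k=64):
    """Illustrative sketch: attend only to the k context tokens whose keys
    score highest against the current query q.

    NOTE: this naive version still computes all exact scores, so it does not
    reduce the asymptotic cost; it only shows the idea of restricting
    attention to a selected subset of tokens.
    """
    d = q.shape[-1]
    scores = K @ q / np.sqrt(d)              # exact scores over all T tokens
    idx = np.argpartition(scores, -k)[-k:]   # indices of the k largest scores
    sel = scores[idx]
    w = np.exp(sel - sel.max())              # softmax over the selected subset
    w /= w.sum()
    return w @ V[idx]                        # weighted sum of selected values

# Toy usage with a hypothetical 4096-token cached context.
T, d = 4096, 64
rng = np.random.default_rng(0)
K, V = rng.normal(size=(T, d)), rng.normal(size=(T, d))
q = rng.normal(size=d)
out = topk_attention_decode(q, K, V, k=128)
print(out.shape)  # (64,)
```

In this toy version, each decoding step still touches every cached key; the contribution described in the abstract is a way to identify the important tokens without that full scan, and with a probabilistic guarantee that the selected tokens are the right ones.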

Country of Origin
🇨🇦 Canada

Repos / Data Links

Page Count
23 pages

Category
Computer Science:
Machine Learning (CS)