Radar: Fast Long-Context Decoding for Any Transformer
By: Yongchang Hao, Mengyao Zhai, Hossein Hajimirsadeghi, and more
Potential Business Impact:
Lets AI models respond to very long texts much faster.
Transformer models have demonstrated exceptional performance across a wide range of applications. Although dot-product attention forms the foundation of Transformers, it does not scale well to long contexts because its time cost grows quadratically with context length. In this work, we propose Radar, a training-free approach that accelerates inference by dynamically searching for the most important context tokens. For any pre-trained Transformer, Radar reduces decoding time complexity without retraining or heuristically evicting tokens. Moreover, we provide theoretical justification for our approach, showing that Radar reliably identifies the most important tokens with high probability. We conduct extensive comparisons with previous methods on a wide range of tasks. The results show that Radar achieves state-of-the-art performance across different architectures with reduced time complexity, offering a practical solution for efficient long-context processing with Transformers.
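To make the idea concrete, here is a minimal sketch of decode-time attention restricted to a dynamically selected subset of context tokens. It is not the Radar algorithm itself, only an illustration of the general principle of scoring chunks of the cached context against the current query and attending only over the most relevant ones; all names (`sparse_decode_attention`, `seg_len`, `top_segments`) are illustrative assumptions, not from the paper.

```python
# Sketch: per-step attention over only the most query-relevant context segments.
# This illustrates the idea of dynamic token selection, not the exact Radar method.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sparse_decode_attention(q, K, V, seg_len=64, top_segments=4):
    """Attend the single decode-step query `q` (shape [d]) over only the
    context segments whose mean key is most aligned with `q`.

    K, V: [T, d] cached keys/values for a context of length T.
    """
    T, d = K.shape
    n_seg = (T + seg_len - 1) // seg_len
    # Coarse importance score per segment: dot product with the segment's mean key.
    seg_scores = np.array([
        q @ K[s * seg_len:(s + 1) * seg_len].mean(axis=0)
        for s in range(n_seg)
    ])
    keep = np.argsort(seg_scores)[-min(top_segments, n_seg):]
    # Token indices belonging to the selected segments.
    idx = np.concatenate([
        np.arange(s * seg_len, min((s + 1) * seg_len, T)) for s in keep
    ])
    # Exact attention, restricted to the selected tokens.
    w = softmax(q @ K[idx].T / np.sqrt(d))
    return w @ V[idx]

# Example: one decoding step over a cached context of 4096 tokens.
rng = np.random.default_rng(0)
d, T = 64, 4096
q, K, V = rng.normal(size=d), rng.normal(size=(T, d)), rng.normal(size=(T, d))
out = sparse_decode_attention(q, K, V)
print(out.shape)  # (64,)
```

In this sketch the per-step cost scales with the number of selected tokens rather than the full context length, which is the kind of saving the paper targets; the paper's actual selection mechanism and its probabilistic guarantees differ from this simplified segment-scoring heuristic.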
Similar Papers
Radar Pulse Deinterleaving with Transformer Based Deep Metric Learning
Signal Processing
Sorts radar signals from different sources automatically.
Multipole Attention for Efficient Long Context Reasoning
Computation and Language
Makes smart computers think faster and better.
A Survey on Transformer Context Extension: Approaches and Evaluation
Computation and Language
Helps computers understand long stories better.