Representation Shift: Unifying Token Compression with FlashAttention
By: Joonmyung Choi, Sanghyeok Lee, Byungoh Ko, and more
Potential Business Impact:
Makes AI faster by smartly dropping unneeded info.
Transformers have demonstrated remarkable success across vision, language, and video. Yet increasing task complexity has led to larger models and more tokens, raising the quadratic cost of self-attention and the overhead of GPU memory access. To reduce the computational cost of self-attention, prior work has proposed token compression techniques that drop redundant or less informative tokens. Meanwhile, fused attention kernels such as FlashAttention have been developed to alleviate memory overhead by avoiding attention map construction and its associated I/O to HBM. This, however, makes FlashAttention incompatible with most training-free token compression methods, which rely on attention maps to determine token importance. Here, we propose Representation Shift, a training-free, model-agnostic metric that measures the degree of change in each token's representation. It integrates token compression seamlessly with FlashAttention, without attention maps or retraining, and further generalizes beyond Transformers to CNNs and state space models. Extensive experiments show that Representation Shift enables effective token compression compatible with FlashAttention, yielding speedups of up to 5.5% in video-text retrieval and 4.4% in video QA. Code is available at https://github.com/mlvlab/Representation-Shift.
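To make the idea concrete, here is a minimal sketch (not the authors' released code) of how a representation-shift score might drive token compression alongside a fused attention kernel, assuming PyTorch. Per-token importance is taken as the change in the hidden representation across a block, measured here with an L2 distance, and only the most-shifted tokens are kept. The function names (representation_shift, compress_tokens), the L2 distance, the keep ratio, and the top-k keep policy are illustrative assumptions rather than the paper's exact formulation; scaled_dot_product_attention stands in for a FlashAttention-style fused kernel that never materializes the attention map.

```python
# Hypothetical sketch of attention-map-free token compression.
# Assumes PyTorch >= 2.0 for F.scaled_dot_product_attention, which, like
# FlashAttention, never writes the attention map to HBM, so token importance
# must come from somewhere else -- here, from the hidden states themselves.
import torch
import torch.nn.functional as F


def representation_shift(x_before: torch.Tensor, x_after: torch.Tensor) -> torch.Tensor:
    """Per-token change in representation, measured with an L2 norm.

    x_before, x_after: (batch, num_tokens, dim) hidden states taken before and
    after a block. The distance and the block boundary are illustrative choices.
    """
    return (x_after - x_before).norm(dim=-1)  # (batch, num_tokens)


def compress_tokens(x: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Run fused self-attention, then drop the tokens whose representations changed least."""
    b, n, d = x.shape
    q = k = v = x  # toy single-head self-attention on the same hidden states
    y = F.scaled_dot_product_attention(q, k, v)  # no attention map is materialized

    scores = representation_shift(x, y)                     # (b, n)
    k_keep = max(1, int(n * keep_ratio))
    keep_idx = scores.topk(k_keep, dim=-1).indices          # most-shifted tokens (assumed keep policy)
    keep_idx, _ = keep_idx.sort(dim=-1)                     # preserve original token order
    return y.gather(1, keep_idx.unsqueeze(-1).expand(-1, -1, d))


if __name__ == "__main__":
    x = torch.randn(2, 196, 64)               # e.g., ViT-style patch tokens
    out = compress_tokens(x, keep_ratio=0.5)
    print(out.shape)                          # torch.Size([2, 98, 64])
```

Because the score is computed from hidden states alone rather than from an attention map, the same ranking can in principle be applied to any backbone that exposes per-token features, which is what allows the approach to extend beyond Transformers.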
Similar Papers
Frequency-Aware Token Reduction for Efficient Vision Transformer
CV and Pattern Recognition
Makes computer vision faster and smarter.
Attention and Compression is all you need for Controllably Efficient Language Models
Machine Learning (CS)
Lets computers remember more with less effort.
SPOT: Sparsification with Attention Dynamics via Token Relevance in Vision Transformers
CV and Pattern Recognition
Makes computer vision faster by removing unneeded parts.