Sliding Window Recurrences for Sequence Models
By: Dragos Secrieru, Garyk Brixi, Yoshua Bengio, and more
Potential Business Impact:
Makes AI understand long stories much faster.
Multi-hybrid architectures are poised to take over language modeling due to better quality and performance. We introduce a hierarchical decomposition framework for linear recurrences that lets us design algorithms aligned with GPU memory hierarchies, yielding Sliding Window Recurrences (SWR). We focus specifically on truncating recurrences to hardware-aligned windows, which are naturally jagged and limit costly inter-warp communication. Using SWR, we develop Phalanx layers that serve as drop-in replacements for windowed attention or linear recurrences. In 1B-parameter multi-hybrid models, Phalanx achieves a 10-40% speedup over optimized Transformers across 4K to 32K context lengths while matching perplexity.
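To make the core idea concrete, below is a minimal reference sketch of a sliding-window linear recurrence: the usual gated recurrence h_t = a_t * h_{t-1} + b_t * x_t, truncated so each output depends only on the last `window` inputs. This is an illustrative assumption, not the paper's Phalanx/SWR GPU kernel; the function name, gate names, and the simple O(T * window) loop are placeholders for what a hardware-aligned, blocked implementation would compute.

```python
import numpy as np

def sliding_window_recurrence(x, a, b, window):
    """Reference sketch (assumed, not the paper's kernel).

    For each position t, returns
        y_t = sum_{s = t-window+1 .. t} (prod_{r = s+1 .. t} a_r) * b_s * x_s,
    i.e. the linear recurrence h_t = a_t * h_{t-1} + b_t * x_t restarted at the
    left edge of a sliding window, so contributions older than `window` steps
    are dropped.
    """
    T = len(x)
    y = np.zeros(T)
    for t in range(T):
        start = max(0, t - window + 1)
        h = 0.0
        # Re-run the recurrence inside the window only; a real SWR kernel
        # would tile this along the window to match GPU memory hierarchies.
        for s in range(start, t + 1):
            h = a[s] * h + b[s] * x[s]
        y[t] = h
    return y


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, W = 16, 4
    x = rng.standard_normal(T)
    a = rng.uniform(0.5, 1.0, T)   # decay / transition gates (assumed range)
    b = rng.standard_normal(T)     # input gates
    print(sliding_window_recurrence(x, a, b, W))
```

The windowing is what makes the sketch analogous to windowed attention: each output has a bounded receptive field, so the computation can be blocked into fixed-size chunks rather than carried as an unbounded running state.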
Similar Papers
Systems and Algorithms for Convolutional Multi-Hybrid Language Models at Scale
Machine Learning (CS)
Makes AI learn and remember much faster.
ENA: Efficient N-dimensional Attention
Machine Learning (CS)
Helps computers understand complex, long data faster.
Sliding Window Attention Adaptation
Computation and Language
Lets computers understand long stories faster.