Score: 2

Sliding Window Recurrences for Sequence Models

Published: December 15, 2025 | arXiv ID: 2512.13921v1

By: Dragos Secrieru, Garyk Brixi, Yoshua Bengio, and more

BigTech Affiliations: Stanford University

Potential Business Impact:

Lets language models process long documents much faster.

Business Areas:
A/B Testing, Data and Analytics

Multi-hybrid architectures are poised to take over language modeling due to better quality and performance. We introduce a hierarchical decomposition framework for linear recurrences that lets us develop algorithms aligned with GPU memory hierarchies, yielding Sliding Window Recurrences (SWR). We focus specifically on truncating recurrences to hardware-aligned windows, which are naturally jagged, limiting costly inter-warp communication. Using SWR, we develop Phalanx layers that serve as drop-in replacements for windowed attention or linear recurrences. In 1B-parameter multi-hybrid models, Phalanx achieves 10-40% speedups over optimized Transformers across 4K to 32K context lengths while matching perplexity.
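To make the core idea concrete, here is a minimal NumPy sketch of a windowed linear recurrence as the abstract describes it: state is reset at fixed window boundaries, so each window can be computed independently (the property SWR exploits to limit inter-warp communication on GPUs). The function name, the scalar state, and the window size are illustrative assumptions, not the paper's implementation or API.

```python
import numpy as np

def sliding_window_recurrence(x, a, window=64):
    """Sketch: compute h_t = a_t * h_{t-1} + x_t, truncated to windows.

    The recurrent state does not flow across window boundaries, so
    windows are independent and can be processed in parallel.
    Names and the scalar-state simplification are hypothetical.
    """
    T = len(x)
    h = np.zeros_like(x)
    for start in range(0, T, window):
        state = 0.0  # recurrence is reset at each window boundary
        for t in range(start, min(start + window, T)):
            state = a[t] * state + x[t]
            h[t] = state
    return h

# Toy usage: 256 time steps, window of 64.
rng = np.random.default_rng(0)
x = rng.standard_normal(256)
a = rng.uniform(0.8, 1.0, size=256)  # per-step decay coefficients
h = sliding_window_recurrence(x, a, window=64)
```

In this simplified view, truncation trades exact long-range state propagation for window-local computation, analogous to how windowed attention bounds its receptive field.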

Country of Origin
🇺🇸 🇯🇵 United States, Japan

Page Count
30 pages

Category
Computer Science:
Machine Learning (CS)