Score: 1

GatedFWA: Linear Flash Windowed Attention with Gated Associative Memory

Published: December 8, 2025 | arXiv ID: 2512.07782v1

By: Jiaxu Liu, Yuhe Bai, Christos-Savvas Bouganis

Potential Business Impact:

Lets AI models process long sequences faster while retaining more of the earlier context.

Business Areas:
Field-Programmable Gate Array (FPGA) Hardware

Modern autoregressive models rely on attention, yet the Softmax full attention in Transformers scales quadratically with sequence length. Sliding Window Attention (SWA) achieves linear-time encoding/decoding by constraining the attention pattern, but under an associative-memory interpretation its difference-style update renders the training objective effectively unbounded. In contrast, Softmax attention normalizes updates, leading to memory shrinkage and gradient vanishing. We propose GatedFWA: a Memory-Gated (Flash) Windowed Attention mechanism that preserves SWA's efficiency while stabilizing memory updates and making gradient flow controllable. In essence, GatedFWA accumulates a per-token/head gate into a decay bias added to the attention logits, acting as a learnable contraction in the memory recurrence. We implement fused one-pass gate preprocessing and a FlashAttention-compatible kernel that injects the gate under a sliding mask, ensuring I/O efficiency and numerical stability. On language-modelling benchmarks, GatedFWA delivers competitive throughput with negligible overhead and better use of global context, and it integrates cleanly with token compression/selection methods such as NSA and generalizes to various autoregressive domains.
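To make the abstract's core idea concrete, here is a minimal sketch of gated sliding-window attention as described above: a per-token/head gate is accumulated (in log space) into a decay bias that is added to the attention logits under a sliding causal mask. This is an illustrative reconstruction from the abstract only, not the authors' fused kernel; the function name, gate parameterization, and tensor shapes are assumptions.

```python
# Hedged sketch of the gating idea from the abstract (not the paper's FlashAttention kernel).
import torch
import torch.nn.functional as F

def gated_sliding_window_attention(q, k, v, gate_logits, window: int):
    """
    q, k, v:      (batch, heads, seq, dim)
    gate_logits:  (batch, heads, seq) raw scores; sigmoid gives a per-token/head gate in (0, 1)
    window:       each query attends to at most the last `window` keys (causal)
    """
    b, h, n, d = q.shape
    scale = d ** -0.5

    # One-pass gate preprocessing: accumulate log-gates along the sequence.
    log_gate = F.logsigmoid(gate_logits)           # (b, h, n), values <= 0
    cum_log_gate = torch.cumsum(log_gate, dim=-1)  # prefix sums G_i = sum_{t<=i} log g_t

    # Decay bias on the logits: bias[i, j] = G_i - G_j <= 0 for j <= i,
    # so older keys are down-weighted by the product of intervening gates
    # (a learnable contraction of the memory recurrence).
    bias = cum_log_gate.unsqueeze(-1) - cum_log_gate.unsqueeze(-2)  # (b, h, n, n)

    # Sliding causal mask: query i sees keys j with i - window < j <= i.
    idx = torch.arange(n, device=q.device)
    dist = idx.unsqueeze(-1) - idx.unsqueeze(0)    # i - j
    mask = (dist >= 0) & (dist < window)           # (n, n), broadcasts over batch/heads

    logits = (q @ k.transpose(-2, -1)) * scale + bias
    logits = logits.masked_fill(~mask, float("-inf"))
    attn = logits.softmax(dim=-1)
    return attn @ v

# Tiny smoke test with random tensors.
b, h, n, d, w = 2, 4, 16, 32, 8
q, k, v = (torch.randn(b, h, n, d) for _ in range(3))
gates = torch.randn(b, h, n)
out = gated_sliding_window_attention(q, k, v, gates, window=w)
print(out.shape)  # torch.Size([2, 4, 16, 32])
```

This reference version materializes the full bias and mask for clarity; the paper's contribution is an I/O-efficient kernel that injects the same gate-derived bias inside a FlashAttention-style sliding-window computation without forming these matrices.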

Country of Origin
🇫🇷 🇬🇧 France, United Kingdom

Page Count
20 pages

Category
Computer Science:
Machine Learning (CS)