Fixed-Point RNNs: Interpolating from Diagonal to Dense

Published: March 13, 2025 | arXiv ID: 2503.10799v2

By: Sajad Movahedi, Felix Sarnthein, Nicola Muca Cirone, and more

Potential Business Impact:

Enables sequence models that track and remember more complex information over long inputs while remaining efficient to train.

Business Areas:
A/B Testing, Data and Analytics

Linear recurrent neural networks (RNNs) and state-space models (SSMs) such as Mamba have become promising alternatives to softmax-attention as sequence mixing layers in Transformer architectures. Current models, however, do not exhibit the full state-tracking expressivity of RNNs because they rely on channel-wise (i.e. diagonal) sequence mixing. In this paper, we investigate parameterizations of a large class of dense linear RNNs as fixed-points of parallelizable diagonal linear RNNs. The resulting models can naturally trade expressivity for efficiency at a fixed number of parameters and achieve state-of-the-art results on the commonly used toy tasks $A_5$, $S_5$, copying, and modular arithmetic.
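To make the fixed-point idea concrete, here is a minimal NumPy sketch, not the paper's implementation: the dense transition matrix is split as $A = D + L$ into its diagonal and off-diagonal parts, and the dense recurrence $h_t = A h_{t-1} + x_t$ is recovered as the fixed point of repeatedly running a diagonal linear RNN whose input is augmented with off-diagonal coupling from the previous iterate. The Jacobi-style splitting, function names, and the sequential loop are illustrative assumptions; in practice the diagonal recurrence would be computed with a parallel scan, and truncating the number of iterations is one plausible way to trade expressivity for efficiency as the abstract describes.

```python
import numpy as np

def dense_linear_rnn(A, x):
    """Reference: dense linear recurrence h_t = A h_{t-1} + x_t, run sequentially."""
    T, d = x.shape
    h, prev = np.zeros((T, d)), np.zeros(d)
    for t in range(T):
        prev = A @ prev + x[t]
        h[t] = prev
    return h

def diagonal_rnn(diag, u):
    """Diagonal linear recurrence h_t = diag * h_{t-1} + u_t.
    Written as a loop here; this is the part that admits a parallel (associative) scan."""
    T, d = u.shape
    h, prev = np.zeros((T, d)), np.zeros(d)
    for t in range(T):
        prev = diag * prev + u[t]
        h[t] = prev
    return h

def fixed_point_dense_rnn(A, x, num_iters):
    """Approximate the dense recurrence as a fixed point of diagonal recurrences.

    Split A = D + L (diagonal + off-diagonal). Each iteration runs the diagonal RNN
        h^{(k+1)}_t = D h^{(k+1)}_{t-1} + L h^{(k)}_{t-1} + x_t,
    whose fixed point satisfies the dense recurrence h_t = A h_{t-1} + x_t.
    On a length-T sequence this iteration is exact after at most T steps;
    fewer iterations give a cheaper approximation (illustrative assumption).
    """
    T, d = x.shape
    D = np.diag(A)            # diagonal part as a vector
    L = A - np.diag(D)        # off-diagonal part
    h = np.zeros((T, d))
    for _ in range(num_iters):
        # Off-diagonal coupling uses the previous iterate, shifted by one time step.
        coupling = np.zeros((T, d))
        coupling[1:] = h[:-1] @ L.T
        h = diagonal_rnn(D, x + coupling)
    return h

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, T = 4, 16
    A = 0.3 * rng.standard_normal((d, d))
    x = rng.standard_normal((T, d))
    err = np.max(np.abs(fixed_point_dense_rnn(A, x, num_iters=T) - dense_linear_rnn(A, x)))
    print(err)  # ~1e-15: the fixed-point iterate matches the dense recurrence
```

The key property the sketch illustrates is that each outer iteration only requires a diagonal (channel-wise) recurrence, which is parallelizable across the sequence, while the fixed point of the iteration equals the fully dense, more expressive recurrence.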

Page Count
27 pages

Category
Computer Science:
Machine Learning (CS)