Score: 1

Characterizing Mamba's Selective Memory using Auto-Encoders

Published: December 17, 2025 | arXiv ID: 2512.15653v1

By: Tamanna Hossain, Robert L. Logan, Ganesh Jagadeesan, and more

Potential Business Impact:

Identifies what memory-efficient AI language models tend to forget (math tokens, organization names, non-standard dialects), pointing to ways to improve retention.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

State space models (SSMs) are a promising alternative to transformers for language modeling because they use fixed memory during inference. However, this fixed memory usage requires some information loss in the hidden state when processing long sequences. While prior work has studied the sequence length at which this information loss occurs, it does not characterize the types of information SSM language models (LMs) tend to forget. In this paper, we address this knowledge gap by identifying the types of tokens (e.g., parts of speech, named entities) and sequences (e.g., code, math problems) that are more frequently forgotten by SSM LMs. We achieve this by training an auto-encoder to reconstruct sequences from the SSM's hidden state, and measuring information loss by comparing inputs with their reconstructions. We perform experiments using the Mamba family of SSM LMs (130M--1.4B) on sequences ranging from 4--256 tokens. Our results show significantly higher rates of information loss on math-related tokens (e.g., numbers, variables), mentions of organization entities, and dialects other than Standard American English. We then examine the frequency with which these tokens appear in Mamba's pretraining data and find that less prevalent tokens tend to be the ones Mamba is most likely to forget. By identifying these patterns, our work provides clear direction for future research to develop methods that better control Mamba's ability to retain important information.
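To make the reconstruction-probe idea concrete, here is a minimal sketch of the general setup the abstract describes: a frozen sequence model compresses a token sequence into a fixed-size hidden state, a small decoder is trained to reconstruct the input from that state alone, and information loss is measured as the fraction of tokens that fail to round-trip. This is not the paper's implementation; the GRU encoder is only a stand-in for Mamba's SSM recurrence, and the vocabulary size, dimensions, and random data are toy assumptions for illustration.

```python
# Sketch of an auto-encoder reconstruction probe (assumptions: GRU stands in
# for the SSM recurrence; toy vocabulary, sizes, and random token data).
import torch
import torch.nn as nn

VOCAB, HIDDEN, MAX_LEN = 1000, 256, 64


class FixedStateEncoder(nn.Module):
    """Stand-in for the frozen LM: compress a token sequence into one state."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        _, state = self.rnn(self.embed(tokens))
        return state                            # (1, batch, HIDDEN)


class Reconstructor(nn.Module):
    """Trainable decoder: predict every input token from the state alone."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, VOCAB)

    def forward(self, state, targets):
        # Teacher forcing: shift targets right; a zero vector plays the BOS role.
        bos = torch.zeros(targets.size(0), 1, HIDDEN)
        inputs = torch.cat([bos, self.embed(targets[:, :-1])], dim=1)
        out, _ = self.rnn(inputs, state)
        return self.head(out)                   # (batch, seq_len, VOCAB)


def information_loss(logits, targets):
    """Fraction of tokens the decoder fails to reconstruct exactly."""
    return (logits.argmax(-1) != targets).float().mean().item()


if __name__ == "__main__":
    encoder, decoder = FixedStateEncoder(), Reconstructor()
    for p in encoder.parameters():              # the encoder/LM stays frozen
        p.requires_grad_(False)
    opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    tokens = torch.randint(0, VOCAB, (8, MAX_LEN))   # toy batch of sequences
    for step in range(100):
        logits = decoder(encoder(tokens), tokens)
        loss = loss_fn(logits.reshape(-1, VOCAB), tokens.reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()

    final = information_loss(decoder(encoder(tokens), tokens), tokens)
    print("token-level information loss:", final)
```

In the paper's setting, the reconstruction-error metric would then be broken down by token type (e.g., part of speech, named-entity class) and sequence type (e.g., code, math) to see which categories the fixed-size state drops most often.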

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
16 pages

Category
Computer Science: Computation and Language