Understanding Input Selectivity in Mamba: Impact on Approximation Power, Memorization, and Associative Recall Capacity
By: Ningyuan Huang, Miguel Sarabia, Abhinav Moudgil, and more
Potential Business Impact:
Mamba learns better by choosing what to remember.
State-Space Models (SSMs), and particularly Mamba, have recently emerged as a promising alternative to Transformers. Mamba introduces input selectivity to its SSM layer (S6) and incorporates convolution and gating into its block definition. While these modifications do improve Mamba's performance over its SSM predecessors, it remains largely unclear how Mamba leverages the additional functionalities provided by input selectivity, and how these interact with the other operations in the Mamba architecture. In this work, we demystify the role of input selectivity in Mamba, investigating its impact on function approximation power, long-term memorization, and associative recall capabilities. In particular: (i) we prove that the S6 layer of Mamba can represent projections onto Haar wavelets, providing an edge over its Diagonal SSM (S4D) predecessor in approximating discontinuous functions commonly arising in practice; (ii) we show how the S6 layer can dynamically counteract memory decay; (iii) we provide analytical solutions to the multi-query associative recall (MQAR) task using the Mamba architecture with different mixers -- Mamba, Mamba-2, and S4D. We demonstrate the tightness of our theoretical constructions with empirical results on concrete tasks. Our findings offer a mechanistic understanding of Mamba and reveal opportunities for improvement.
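To make the notion of input selectivity concrete, the sketch below contrasts a fixed-parameter diagonal SSM recurrence (S4D-style) with a selective recurrence (S6-style) in which the step size and the write/read projections depend on the current input. This is a minimal, single-channel NumPy illustration under simplifying assumptions: the function and parameter names (s4d_scan, s6_scan, w_delta, b_delta, w_B, w_C) are illustrative, the discretization is the commonly used simplified one, and none of this is the paper's notation or the official Mamba implementation.

```python
import numpy as np


def softplus(z):
    """Numerically stable softplus: log(1 + exp(z))."""
    return np.logaddexp(0.0, z)


def s4d_scan(x, A_bar, B_bar, C):
    """Fixed (input-independent) diagonal SSM recurrence, S4D-style:
        h_t = A_bar * h_{t-1} + B_bar * x_t,   y_t = <C, h_t>
    A_bar, B_bar, C are identical at every step, so how quickly past
    information decays is baked in at training time.
    """
    h = np.zeros_like(A_bar)
    ys = []
    for x_t in x:
        h = A_bar * h + B_bar * x_t
        ys.append(C @ h)
    return np.array(ys)


def s6_scan(x, A, w_delta, b_delta, w_B, w_C):
    """Selective diagonal SSM recurrence, S6-style (single channel, sketch).
    The step size Delta_t and the projections B_t, C_t are functions of the
    current input x_t, so the layer chooses per step how strongly to decay
    its state (via exp(Delta_t * A)) and what to write and read.
    """
    h = np.zeros_like(A)
    ys = []
    for x_t in x:
        delta_t = softplus(w_delta * x_t + b_delta)  # positive step size
        B_t = w_B * x_t                              # input-dependent write direction
        C_t = w_C * x_t                              # input-dependent readout
        A_bar = np.exp(delta_t * A)                  # per-step decay, in (0, 1) when A < 0
        h = A_bar * h + delta_t * B_t * x_t          # simplified discretization of B
        ys.append(C_t @ h)
    return np.array(ys)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, N = 32, 8                       # sequence length, state size
    x = rng.standard_normal(T)         # toy scalar input sequence
    A = -np.linspace(0.5, 4.0, N)      # stable (negative) diagonal state matrix

    y_fixed = s4d_scan(x, A_bar=np.exp(0.1 * A),
                       B_bar=0.1 * np.ones(N), C=rng.standard_normal(N))
    y_selective = s6_scan(x, A, w_delta=1.0, b_delta=0.0,
                          w_B=rng.standard_normal(N), w_C=rng.standard_normal(N))
    print(y_fixed.shape, y_selective.shape)   # (32,) (32,)
```

Loosely, the point of the contrast is that the selective recurrence can push exp(Delta_t * A) toward 1 on inputs it wants to carry forward and toward 0 on inputs that should overwrite the state; this per-step control over decay is the kind of behavior behind the memory-decay result described in the abstract, and it is exactly what the fixed A_bar in the S4D-style scan cannot do.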
Similar Papers
PerfMamba: Performance Analysis and Pruning of Selective State Space Models
Machine Learning (CS)
Makes computer models run faster and use less memory.
Block-Biased Mamba for Long-Range Sequence Processing
Machine Learning (CS)
Makes AI better at remembering long stories.
Characterizing Mamba's Selective Memory using Auto-Encoders
Computation and Language
Helps AI remember math and names better.