Differential Mamba
By: Nadav Schneider, Itamar Zimerman, Eliya Nachmani
Potential Business Impact:
Makes AI better at remembering and using information from long texts, with fewer hallucinations.
Sequence models like Transformers and RNNs often overallocate attention to irrelevant context, leading to noisy intermediate representations. This degrades LLM capabilities by promoting hallucinations, weakening long-range and retrieval abilities, and reducing robustness. Recent work has shown that differential design can mitigate this issue in Transformers, improving their effectiveness across various applications. In this paper, we explore whether these techniques, originally developed for Transformers, can be applied to Mamba, a recent architecture based on selective state-space layers that achieves Transformer-level performance with greater efficiency. We show that a naive adaptation of differential design to Mamba is insufficient and requires careful architectural modifications. To address this, we introduce a novel differential mechanism for Mamba, empirically validated on language modeling benchmarks, demonstrating improved retrieval capabilities and superior performance over vanilla Mamba. Finally, we conduct extensive ablation studies and empirical analyses to justify our design choices and provide evidence that our approach effectively mitigates the overallocation problem in Mamba-based models. Our code is publicly available.
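For readers unfamiliar with differential design, the sketch below illustrates the general idea the paper builds on, borrowed from the Differential Transformer: run two parallel branches over the same input and subtract their outputs with a learned coefficient so that noise shared by both branches cancels. This is only a minimal illustration under that assumption, not the paper's Differential Mamba mechanism (the abstract stresses that a naive adaptation to Mamba is insufficient); the class and parameter names (DifferentialBranchPair, lambda_init) and the pointwise stand-in for the selective state-space block are hypothetical.

# Minimal sketch of the generic differential design (two parallel branches,
# outputs combined by subtraction with a learned coefficient). Not the paper's
# exact mechanism; names and the toy mixer are illustrative assumptions.

import torch
import torch.nn as nn


class DifferentialBranchPair(nn.Module):
    """Wraps two copies of a sequence mixer (stand-in for a Mamba/SSM block)
    and combines their outputs differentially: y = norm(y_a - lambda * y_b)."""

    def __init__(self, make_mixer, d_model: int, lambda_init: float = 0.5):
        super().__init__()
        self.branch_a = make_mixer()   # primary branch (e.g., a selective SSM block)
        self.branch_b = make_mixer()   # second branch acts as a shared-noise estimate
        self.log_lambda = nn.Parameter(torch.tensor(lambda_init).log())
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y_a = self.branch_a(x)
        y_b = self.branch_b(x)
        lam = self.log_lambda.exp()        # keep the mixing coefficient positive
        return self.norm(y_a - lam * y_b)  # subtraction suppresses shared noise


if __name__ == "__main__":
    d_model = 64
    # A toy pointwise network stands in for the real selective state-space layer;
    # in the paper's setting each branch would be a Mamba-style sequence mixer.
    make_mixer = lambda: nn.Sequential(nn.Linear(d_model, d_model), nn.SiLU(),
                                       nn.Linear(d_model, d_model))
    block = DifferentialBranchPair(make_mixer, d_model)
    x = torch.randn(2, 128, d_model)       # (batch, sequence length, d_model)
    print(block(x).shape)                   # torch.Size([2, 128, 64])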
Similar Papers
MambAttention: Mamba with Multi-Head Attention for Generalizable Single-Channel Speech Enhancement
Sound
Cleans up noisy audio for clearer sound.
Achilles' Heel of Mamba: Essential difficulties of the Mamba architecture demonstrated by synthetic data
Machine Learning (CS)
Mamba struggles with mirrored patterns.
Block-Biased Mamba for Long-Range Sequence Processing
Machine Learning (CS)
Makes AI better at handling very long sequences.