Hidden State Poisoning Attacks against Mamba-based Language Models
By: Alexandre Le Mercier, Chris Develder, Thomas Demeester
Potential Business Impact:
Makes AI forget important things with special words.
State space models (SSMs) such as Mamba offer efficient alternatives to Transformer-based language models, with linear time complexity. Yet their adversarial robustness remains critically unexplored. This paper studies a phenomenon whereby specific short input phrases induce a partial amnesia effect in such models by irreversibly overwriting information in their hidden states, referred to as a Hidden State Poisoning Attack (HiSPA). Our benchmark, RoBench25, evaluates a model's information retrieval capabilities when subject to HiSPAs and confirms the vulnerability of SSMs to such attacks. Even a recent 52B hybrid SSM-Transformer model from the Jamba family collapses on RoBench25 under optimized HiSPA triggers, whereas pure Transformers do not. We also observe that HiSPA triggers significantly weaken the Jamba model on the popular Open-Prompt-Injections benchmark, again unlike pure Transformers. Finally, our interpretability study reveals patterns in Mamba's hidden layers during HiSPAs that could be used to build a HiSPA mitigation system. The full code and data to reproduce the experiments can be found at https://anonymous.4open.science/r/hispa_anonymous-5DB0.
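To make the attack intuition concrete, below is a minimal sketch of how overwriting could happen in a selective SSM. This is not the paper's implementation: the toy diagonal recurrence h_t = a_t * h_{t-1} + b_t * x_t, the near-zero trigger decay value, and the function name step are all illustrative assumptions. The sketch only shows the mechanism the abstract describes: in Mamba-style models the state-transition gate is input-dependent, so a token whose gate drives the decay toward zero would irreversibly erase earlier hidden-state content.

```python
import numpy as np

# Toy diagonal selective-SSM recurrence (illustrative, NOT the paper's code):
#   h_t = a_t * h_{t-1} + b_t * x_t
# In Mamba-style models the decay a_t and input gate b_t depend on the
# current token. A hypothetical HiSPA trigger is modeled here as a token
# whose decay is ~0, so it overwrites the running state.

rng = np.random.default_rng(0)
d = 4  # toy state dimension

def step(h, x, a, b):
    """One recurrence step: element-wise decay of h, then write x."""
    return a * h + b * x

# Per-token (decay, input-gate) pairs; the values are made up.
normal_token = (np.full(d, 0.95), np.full(d, 0.10))   # retains most of h
trigger_token = (np.full(d, 1e-4), np.full(d, 1.00))  # near-zero decay

h = np.zeros(d)
# Feed "important" context tokens first.
for _ in range(20):
    h = step(h, rng.normal(size=d), *normal_token)
print("state norm before trigger:", np.linalg.norm(h))

# A single trigger token collapses the state: prior information is
# scaled by ~1e-4 and cannot be recovered by any later token.
h = step(h, rng.normal(size=d), *trigger_token)
print("state norm after trigger: ", np.linalg.norm(h))
```

A Transformer, by contrast, re-attends to the full token sequence at every step, so a single token cannot destroy access to earlier context this way; that asymmetry is consistent with the paper's finding that pure Transformers do not collapse on RoBench25.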
Similar Papers
Characterizing Mamba's Selective Memory using Auto-Encoders
Computation and Language
Helps AI remember math and names better.
PerfMamba: Performance Analysis and Pruning of Selective State Space Models
Machine Learning (CS)
Makes computer models run faster and use less memory.