Steering MoE LLMs via Expert (De)Activation
By: Mohsen Fayyaz, Ali Modarressi, Hanieh Deilamsalehy, and more
Potential Business Impact:
Controls AI behavior without changing its brain.
Mixture-of-Experts (MoE) in Large Language Models (LLMs) routes each token through a subset of specialized Feed-Forward Networks (FFNs), known as experts. We present SteerMoE, a framework for steering MoE models by detecting and controlling behavior-linked experts. Our detection method identifies experts with distinct activation patterns across paired inputs exhibiting contrasting behaviors. By selectively (de)activating such experts during inference, we control behaviors like faithfulness and safety without retraining or modifying weights. Across 11 benchmarks and 6 LLMs, our steering raises safety by up to +20% and faithfulness by +27%. Used adversarially, it lowers safety by 41% on its own, and by 100% when combined with existing jailbreak methods, bypassing all safety guardrails and exposing a new dimension of alignment faking hidden within experts.
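The abstract does not spell out the detection or steering mechanics, but the core idea lends itself to a compact sketch. Below is a minimal, illustrative PyTorch version assuming a standard top-k-routed MoE layer: experts are scored by how often they land in the router's top-k on paired inputs with contrasting behaviors, and steering is done by masking router logits at inference. All function names, the top-k assumption, and the 0.2 margin are placeholders, not the paper's actual procedure.

```python
import torch

def expert_activation_rates(router_probs: torch.Tensor, top_k: int = 2) -> torch.Tensor:
    """Fraction of tokens for which each expert lands in the router's top-k.

    router_probs: [num_tokens, num_experts] routing probabilities for one MoE layer.
    """
    topk_idx = router_probs.topk(top_k, dim=-1).indices                # [num_tokens, top_k]
    hits = torch.zeros_like(router_probs).scatter_(1, topk_idx, 1.0)   # 1 where expert was selected
    return hits.mean(dim=0)                                            # [num_experts]

def detect_behavior_experts(pos_probs, neg_probs, top_k=2, margin=0.2):
    """Flag experts whose activation rate differs sharply between paired inputs
    exhibiting contrasting behaviors (e.g. safe vs. unsafe responses)."""
    diff = expert_activation_rates(pos_probs, top_k) - expert_activation_rates(neg_probs, top_k)
    pos_linked = (diff > margin).nonzero().flatten().tolist()   # fire more on behavior A
    neg_linked = (diff < -margin).nonzero().flatten().tolist()  # fire more on behavior B
    return pos_linked, neg_linked

def steer_router_logits(logits, deactivate=(), activate=()):
    """Mask a layer's router logits at inference: deactivated experts can never be
    selected, activated experts always are. Model weights are left untouched."""
    logits = logits.clone()
    if len(deactivate) > 0:
        logits[..., list(deactivate)] = float("-inf")      # expert can never enter top-k
    if len(activate) > 0:
        logits[..., list(activate)] = logits.max() + 1e4   # expert always enters top-k
    return logits

# Toy demo with random routing probabilities for an 8-expert layer.
torch.manual_seed(0)
pos_logits = torch.randn(512, 8)
pos_logits[:, 3] += 2.0                              # pretend expert 3 fires more under behavior A
pos = torch.softmax(pos_logits, dim=-1)              # routing on behavior-A prompts
neg = torch.softmax(torch.randn(512, 8), dim=-1)     # routing on behavior-B prompts
a_experts, b_experts = detect_behavior_experts(pos, neg)
steered = steer_router_logits(torch.randn(4, 8), deactivate=a_experts)
```

In a real model this masking would be applied per MoE layer during generation, e.g. through a forward hook on each router; the thresholds and hook mechanics here are illustrative assumptions only.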
Similar Papers
Steer-MoE: Efficient Audio-Language Alignment with a Mixture-of-Experts Steering Module
Sound
Makes computers understand sounds like humans.
BadMoE: Backdooring Mixture-of-Experts LLMs via Optimizing Routing Triggers and Infecting Dormant Experts
Cryptography and Security
Makes AI models do bad things when told.
ExpertSteer: Intervening in LLMs through Expert Knowledge
Computation and Language
Guides AI to act as you want.