AdaSteer: Your Aligned LLM is Inherently an Adaptive Jailbreak Defender
By: Weixiang Zhao, Jiahe Guo, Yulin Hu, and more
Potential Business Impact:
Keeps AI from saying bad things, even when tricked.
Despite extensive efforts in safety alignment, large language models (LLMs) remain vulnerable to jailbreak attacks. Activation steering offers a training-free defense method but relies on fixed steering coefficients, resulting in suboptimal protection and increased false rejections of benign inputs. To address this, we propose AdaSteer, an adaptive activation steering method that dynamically adjusts model behavior based on input characteristics. We identify two key properties: Rejection Law (R-Law), which shows that stronger steering is needed for jailbreak inputs opposing the rejection direction, and Harmfulness Law (H-Law), which differentiates adversarial and benign inputs. AdaSteer steers input representations along both the Rejection Direction (RD) and Harmfulness Direction (HD), with adaptive coefficients learned via logistic regression, ensuring robust jailbreak defense while preserving benign input handling. Experiments on LLaMA-3.1, Gemma-2, and Qwen2.5 show that AdaSteer outperforms baseline methods across multiple jailbreak attacks with minimal impact on utility. Our results highlight the potential of interpretable model internals for real-time, flexible safety enforcement in LLMs.
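To make the mechanism concrete, below is a minimal, illustrative sketch of adaptive two-direction activation steering. It is not the paper's implementation: the direction vectors, the logistic-coefficient parameters, and the scaling constant are all placeholder assumptions, and the logistic weights would in practice be fit on labeled benign vs. jailbreak activations as the abstract describes.

```python
import numpy as np

# Hypothetical unit vectors for the Rejection Direction (RD) and
# Harmfulness Direction (HD), assumed to be extracted from hidden states.
rng = np.random.default_rng(0)
rejection_dir = rng.standard_normal(4096)
rejection_dir /= np.linalg.norm(rejection_dir)
harmfulness_dir = rng.standard_normal(4096)
harmfulness_dir /= np.linalg.norm(harmfulness_dir)

def adaptive_coefficient(projection, w, b):
    """Map an input's projection onto a direction to a steering strength
    via a logistic function; w and b stand in for parameters that would be
    learned with logistic regression (illustrative values only)."""
    return 1.0 / (1.0 + np.exp(-(w * projection + b)))

def adasteer_like_edit(hidden_state, w_rd=-5.0, b_rd=0.0,
                       w_hd=5.0, b_hd=0.0, scale=8.0):
    """Steer one hidden-state vector along RD and HD with input-dependent
    coefficients: inputs projecting against the rejection direction (likely
    jailbreaks) get a stronger push toward refusal, while benign-looking
    inputs are steered less. All constants are placeholders."""
    proj_rd = float(hidden_state @ rejection_dir)
    proj_hd = float(hidden_state @ harmfulness_dir)
    alpha = adaptive_coefficient(proj_rd, w_rd, b_rd)  # stronger when opposing RD
    beta = adaptive_coefficient(proj_hd, w_hd, b_hd)   # stronger for harmful-looking inputs
    return hidden_state + scale * (alpha * rejection_dir + beta * harmfulness_dir)

# Example: steer a single residual-stream vector at inference time.
h = rng.standard_normal(4096)
h_steered = adasteer_like_edit(h)
```

The key design point this sketch mirrors is that the steering strength is a function of the input's own representation, rather than a fixed coefficient applied uniformly to every prompt.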
Similar Papers
AlphaSteer: Learning Refusal Steering with Principled Null-Space Constraint
Machine Learning (CS)
Keeps AI helpful, stops it from doing bad things.
Security Steerability is All You Need
Cryptography and Security
Makes AI follow rules to stop bad questions.
SafeSteer: Interpretable Safety Steering with Refusal-Evasion in LLMs
Machine Learning (CS)
Makes AI say safe things without refusing.