AlphaSteer: Learning Refusal Steering with Principled Null-Space Constraint
By: Leheng Sheng, Changshuo Shen, Weixiang Zhao, and more
Potential Business Impact:
Keeps AI helpful while stopping it from doing bad things.
As LLMs are increasingly deployed in real-world applications, ensuring that they refuse malicious prompts, especially jailbreak attacks, is essential for safe and reliable use. Recently, activation steering has emerged as an effective approach to enhancing LLM safety: a refusal direction vector is added to the model's internal activations during inference, inducing refusal behavior. However, applying activation steering indiscriminately suffers from a fundamental trade-off between safety and utility, since the same steering vector can also cause over-refusal and degraded performance on benign prompts. Although prior efforts such as vector calibration and conditional steering have attempted to mitigate this trade-off, their lack of theoretical grounding limits their robustness and effectiveness. To better address the safety-utility trade-off, we present AlphaSteer, a theoretically grounded and empirically effective activation steering method. Specifically, it treats activation steering as a learnable process with two principled objectives: utility preservation and safety enhancement. For utility preservation, it learns to produce a nearly zero steering vector for benign data under a null-space constraint. For safety enhancement, it learns to produce a refusal direction vector for malicious data via linear regression. Experiments across multiple jailbreak attacks and utility benchmarks demonstrate the effectiveness of AlphaSteer, which significantly improves the safety of LLMs without compromising their general capabilities. Our code is available at https://github.com/AlphaLab-USTC/AlphaSteer.
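To make the two objectives concrete, below is a minimal sketch of how an activation-dependent steering vector could be learned under a null-space constraint (benign activations map to ~0 steering) and a regression objective (malicious activations map to a refusal direction). All variable names and the ridge-regression formulation are illustrative assumptions, not the authors' implementation; see the repository linked above for the actual method.

```python
# Hypothetical sketch: null-space-constrained steering learned with ridge regression.
import numpy as np

def learn_steering_matrix(H_benign, H_malicious, refusal_dir, rank_tol=1e-6, ridge=1e-3):
    """Learn W so benign activations yield ~zero steering and
    malicious activations are steered toward the refusal direction.

    H_benign:    (n_b, d) benign prompt activations at one layer
    H_malicious: (n_m, d) malicious prompt activations at the same layer
    refusal_dir: (d,) refusal direction vector
    """
    # Utility preservation: build a projector onto the null space of the
    # benign activations, so that H_benign @ P is approximately zero.
    _, S, Vt = np.linalg.svd(H_benign, full_matrices=True)
    rank = int((S > rank_tol * S[0]).sum())
    N = Vt[rank:].T            # (d, d - rank) null-space basis
    P = N @ N.T                # (d, d) null-space projector

    # Safety enhancement: ridge regression so projected malicious activations
    # are mapped onto the refusal direction.
    X = H_malicious @ P                          # (n_m, d)
    Y = np.tile(refusal_dir, (len(X), 1))        # (n_m, d) regression targets
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)
    return P @ W                                 # (d, d) steering matrix

def steer(h, W, strength=1.0):
    """Add the activation-dependent steering vector at inference time."""
    return h + strength * (h @ W)
```

Under these assumptions, a benign activation lying in the span of the benign training data is annihilated by the projector (near-zero steering), while a malicious activation is pushed along the refusal direction, which is the qualitative behavior the abstract describes.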
Similar Papers
SafeSteer: Interpretable Safety Steering with Refusal-Evasion in LLMs
Machine Learning (CS)
Makes AI say safe things without refusing.
AdaSteer: Your Aligned LLM is Inherently an Adaptive Jailbreak Defender
Cryptography and Security
Keeps AI from saying bad things, even when tricked.
Refusal Steering: Fine-grained Control over LLM Refusal Behaviour for Sensitive Topics
Computation and Language
Controls AI's opinions on sensitive topics.