Refusal Steering: Fine-grained Control over LLM Refusal Behaviour for Sensitive Topics
By: Iker García-Ferrero, David Montero, Roman Orus
Potential Business Impact:
Controls when AI refuses to discuss sensitive topics.
We introduce Refusal Steering, an inference-time method for fine-grained control over Large Language Models' refusal behaviour on politically sensitive topics without retraining. We replace fragile pattern-based refusal detection with an LLM-as-a-judge that assigns refusal confidence scores, and we propose a ridge-regularized variant to compute steering vectors that better isolate the refusal–compliance direction. On Qwen3-Next-80B-A3B-Thinking, our method removes the model's refusal behaviour on politically sensitive topics while maintaining safety on JailbreakBench and near-baseline performance on general benchmarks. The approach generalizes across 4B and 80B models and can also induce targeted refusals when desired. We analyze the steering vectors and show that refusal signals concentrate in the deeper layers of the transformer and are distributed across many dimensions. Together, these results demonstrate that activation steering can remove political refusal behaviour while retaining safety alignment for harmful content, offering a practical path to controllable, transparent moderation at inference time.
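The abstract does not include an implementation, but the ridge-regularized steering-vector idea can be sketched as follows. This is a hedged illustration, not the authors' code: it assumes activations are collected at a single layer, that the judge's refusal confidence scores serve as regression targets, and that the normalized ridge weights give the refusal–compliance direction, which is then added to or subtracted from hidden states at inference time. The function names (`ridge_steering_vector`, `steer`) and the hyperparameters `lam` and `alpha` are illustrative choices, not values from the paper.

```python
import numpy as np

def ridge_steering_vector(acts: np.ndarray, scores: np.ndarray,
                          lam: float = 1.0) -> np.ndarray:
    """Fit a ridge-regularized direction from hidden activations to
    judge-assigned refusal confidence scores.

    acts   : (n_examples, d_model) hidden states at one layer
    scores : (n_examples,) refusal confidence in [0, 1]
    lam    : ridge penalty; larger values shrink noisy dimensions

    Returns a unit-norm steering vector along the refusal direction.
    """
    X = acts - acts.mean(axis=0)        # center activations
    y = scores - scores.mean()          # center targets
    d = X.shape[1]
    # Closed-form ridge solution: w = (X^T X + lam * I)^{-1} X^T y
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    return w / np.linalg.norm(w)

def steer(hidden: np.ndarray, direction: np.ndarray,
          alpha: float = 8.0) -> np.ndarray:
    """Subtract the refusal direction from hidden states to suppress
    refusals; use +alpha instead to induce targeted refusals."""
    return hidden - alpha * direction

# Synthetic usage example (illustrative data only):
rng = np.random.default_rng(0)
acts = rng.normal(size=(256, 64))       # hidden states at one layer
scores = rng.uniform(size=256)          # judge refusal confidences
v = ridge_steering_vector(acts, scores, lam=10.0)
steered = steer(rng.normal(size=(8, 64)), v, alpha=8.0)
```

Consistent with the abstract, such a hook would be applied at the deeper transformer layers, where the authors report refusal signals concentrate, and flipping the sign of `alpha` corresponds to inducing targeted refusals rather than removing them.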
Similar Papers
SafeSteer: Interpretable Safety Steering with Refusal-Evasion in LLMs
Machine Learning (CS)
Makes AI say safe things without refusing.
Energy-Driven Steering: Reducing False Refusals in Large Language Models
Machine Learning (CS)
Makes AI helpful without being too scared.
Steering the CensorShip: Uncovering Representation Vectors for LLM "Thought" Control
Computation and Language
Lets computers share more honest answers.