Beyond I'm Sorry, I Can't: Dissecting Large Language Model Refusal
By: Nirmalendu Prakash, Yeo Wei Jie, Amir Abdullah, and more
Potential Business Impact:
Shows which internal features cause an AI to refuse harmful requests, enabling fine-grained safety audits and revealing how ablating those features can jailbreak the model.
Refusal on harmful prompts is a key safety behaviour in instruction-tuned large language models (LLMs), yet the internal causes of this behaviour remain poorly understood. We study two public instruction-tuned models, Gemma-2-2B-IT and LLaMA-3.1-8B-IT, using sparse autoencoders (SAEs) trained on residual-stream activations. Given a harmful prompt, we search the SAE latent space for feature sets whose ablation flips the model from refusal to compliance, demonstrating causal influence and creating a jailbreak. Our search proceeds in three stages: (1) Refusal Direction: find a refusal-mediating direction and collect SAE features near that direction; (2) Greedy Filtering: prune to a minimal set; and (3) Interaction Discovery: fit a factorization machine (FM) that captures nonlinear interactions among the remaining active features and the minimal set. This pipeline yields a broad set of jailbreak-critical features, offering insight into the mechanistic basis of refusal. Moreover, we find evidence of redundant features that remain dormant unless earlier features are suppressed. Our findings highlight the potential for fine-grained auditing and targeted intervention in safety behaviours by manipulating the interpretable latent space.
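The abstract describes the three-stage search only at a high level; below is a minimal, illustrative Python sketch of how such a pipeline could be wired together. It assumes a precomputed unit refusal direction in the residual stream, access to the SAE decoder matrix, and a helper `flips_to_compliance` that reruns the model with a given feature set ablated and reports whether the refusal disappears. All names and signatures here are hypothetical, not the authors' code.

```python
# Illustrative sketch of the three-stage search, under the assumptions stated above.
import numpy as np

def candidate_features(W_dec: np.ndarray, refusal_dir: np.ndarray, top_k: int = 50):
    """Stage 1: rank SAE features by cosine similarity of their decoder
    directions to the refusal-mediating direction."""
    dirs = W_dec / np.linalg.norm(W_dec, axis=1, keepdims=True)   # (n_features, d_model)
    sims = dirs @ (refusal_dir / np.linalg.norm(refusal_dir))     # (n_features,)
    return list(np.argsort(-sims)[:top_k])

def greedy_filter(candidates, flips_to_compliance):
    """Stage 2: greedily drop features whose removal still leaves a set
    whose ablation flips the model from refusal to compliance."""
    kept = list(candidates)
    for f in list(candidates):
        trial = [g for g in kept if g != f]
        if trial and flips_to_compliance(trial):  # jailbreak still works without f
            kept = trial
    return kept

def fm_interaction_score(x: np.ndarray, w: np.ndarray, V: np.ndarray) -> float:
    """Stage 3: second-order factorization-machine score over indicator
    vector x of active features (w: linear weights, V: factor matrix)."""
    linear = float(w @ x)
    # Standard FM identity for sum_{i<j} <v_i, v_j> x_i x_j
    pairwise = 0.5 * float(np.square(V.T @ x).sum()
                           - (np.square(V) * np.square(x)[:, None]).sum())
    return linear + pairwise
```

Stage 3 above uses the standard second-order factorization-machine identity; in the paper the FM is fit over the remaining active features together with the minimal set to surface nonlinear interactions among them.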
Similar Papers
From Rogue to Safe AI: The Role of Explicit Refusals in Aligning LLMs with International Humanitarian Law
Computers and Society
AI learns to refuse illegal or harmful requests.
Think Before Refusal: Triggering Safety Reflection in LLMs to Mitigate False Refusal Behavior
Computation and Language
Helps AI answer questions without wrongly saying "no."
Should LLM Safety Be More Than Refusing Harmful Instructions?
Computation and Language
Argues AI safety should go beyond refusing harmful instructions, covering tricky hidden or encoded wording.