Unraveling LLM Jailbreaks Through Safety Knowledge Neurons
By: Chongwen Zhao, Kaizhu Huang
Potential Business Impact:
Makes AI safer from bad instructions.
Large Language Models (LLMs) are attracting increasing attention across various applications. Nonetheless, there is growing concern that some users attempt to exploit these models for malicious purposes, such as synthesizing controlled substances or propagating disinformation, a practice known as "jailbreaking." While some studies have defended against jailbreak attacks by modifying output distributions or detecting harmful content, the exact rationale behind these attacks remains elusive. In this work, we present a novel neuron-level interpretability method that focuses on the role of safety-related knowledge neurons. Unlike existing approaches, our method projects the model's internal representations into a more consistent and interpretable vocabulary space. We then show that adjusting the activation of safety-related neurons can effectively control the model's behavior, with a mean attack success rate (ASR) higher than 97%. Building on this insight, we propose SafeTuning, a fine-tuning strategy that reinforces safety-critical neurons to improve model robustness against jailbreaks. SafeTuning consistently reduces attack success rates across multiple LLMs and outperforms all four baseline defenses. These findings offer a new perspective on understanding and defending against jailbreak attacks.
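To make the two core ideas in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' code or released implementation): (1) projecting an intermediate hidden state into the vocabulary space, logit-lens style, to interpret what a layer encodes, and (2) scaling the activation of selected MLP neurons via a forward hook to observe how that changes the model's next-token behavior. The model choice (GPT-2), the layer index, the neuron indices in SAFETY_NEURONS, and the scaling factor SCALE are all illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of neuron-level inspection and activation steering.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

prompt = "The assistant should refuse requests that are"
inputs = tok(prompt, return_tensors="pt")

# (1) Project a mid-layer hidden state into vocabulary space (logit-lens style).
with torch.no_grad():
    out = model(**inputs)
hidden = out.hidden_states[6][:, -1, :]                    # layer-6 state of the last token (assumed layer)
vocab_logits = model.lm_head(model.transformer.ln_f(hidden))
top_ids = vocab_logits.topk(5).indices[0].tolist()
print("Top tokens read out at layer 6:", tok.convert_ids_to_tokens(top_ids))

# (2) Scale the activation of hypothetical "safety" neurons in one MLP layer.
SAFETY_NEURONS = [11, 42, 300]                             # placeholder indices, not from the paper
SCALE = 5.0                                                # amplify these neurons' activations

def scale_neurons(module, inp, output):
    # Boost selected intermediate-MLP neurons before the nonlinearity.
    output[..., SAFETY_NEURONS] *= SCALE
    return output

handle = model.transformer.h[6].mlp.c_fc.register_forward_hook(scale_neurons)
with torch.no_grad():
    boosted_logits = model(**inputs).logits[:, -1, :]
handle.remove()

top_ids = boosted_logits.topk(5).indices[0].tolist()
print("Top next tokens after boosting:", tok.convert_ids_to_tokens(top_ids))
```

Comparing the top tokens before and after the hook gives a rough sense of how amplifying or suppressing a small set of neurons can shift the model toward or away from refusal-like continuations, which is the kind of behavioral control the abstract describes at a high level.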
Similar Papers
SafeLLM: Unlearning Harmful Outputs from Large Language Models against Jailbreak Attacks
Machine Learning (CS)
Makes AI models forget bad things they learned.
NeuRel-Attack: Neuron Relearning for Safety Disalignment in Large Language Models
Machine Learning (CS)
Makes AI say bad things it was told not to.
Unified Defense for Large Language Models against Jailbreak and Fine-Tuning Attacks in Education
Computation and Language
Keeps AI tutors from giving bad answers.