JPU: Bridging Jailbreak Defense and Unlearning via On-Policy Path Rectification
By: Xi Wang, Songlei Jian, Shasha Li, and more
Potential Business Impact:
Fixes AI so it can't be tricked into saying bad things.
Despite extensive safety alignment, Large Language Models (LLMs) often fail against jailbreak attacks. While machine unlearning has emerged as a promising defense that erases specific harmful parameters, current methods remain vulnerable to diverse jailbreaks. We first conduct an empirical study and find that this failure arises because jailbreaks primarily activate non-erased parameters in the intermediate layers. Further, by probing the mechanism through which these circumvented parameters reassemble the prohibited output, we verify the persistent existence of dynamic $\textbf{jailbreak paths}$ and show that the inability to rectify them is the fundamental gap in existing unlearning defenses. To bridge this gap, we propose $\textbf{J}$ailbreak $\textbf{P}$ath $\textbf{U}$nlearning (JPU), the first method to rectify dynamic jailbreak paths toward safety anchors; it dynamically mines on-policy adversarial samples to expose vulnerabilities and identify the jailbreak paths to be rectified. Extensive experiments demonstrate that JPU significantly enhances jailbreak resistance against dynamic attacks while preserving the model's utility.
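The abstract does not spell out JPU's algorithm, but the loop it describes — mine on-policy adversarial samples against the current model, probe the intermediate-layer activations they traverse, and push those activations toward a safety anchor while preserving utility on benign inputs — can be sketched conceptually. The PyTorch sketch below is only an illustration of that loop under stated assumptions, not the authors' implementation: `ToyLM`, `mine_on_policy_adversarial`, `safety_anchor`, and the placeholder harmful/benign embeddings are all hypothetical stand-ins.

```python
import copy
import torch
import torch.nn as nn

# Toy stand-in for an LLM: a small network whose intermediate activation we can probe.
class ToyLM(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.layer1 = nn.Linear(dim, dim)
        self.layer2 = nn.Linear(dim, dim)  # hypothetical locus of the "jailbreak path"
        self.head = nn.Linear(dim, dim)

    def forward(self, x):
        h1 = torch.relu(self.layer1(x))
        h2 = torch.relu(self.layer2(h1))
        return self.head(h2), h2           # also return the intermediate activation


def mine_on_policy_adversarial(model, base_prompt, steps=5, lr=0.1):
    """Hypothetical stand-in for on-policy adversarial mining: perturb the input
    embedding by gradient ascent on a placeholder harmful-output score."""
    delta = torch.zeros_like(base_prompt, requires_grad=True)
    for _ in range(steps):
        out, _ = model(base_prompt + delta)
        harm_score = out.mean()            # placeholder objective, not the paper's attack
        harm_score.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()
            delta.grad.zero_()
    return (base_prompt + delta).detach()


dim = 32
model = ToyLM(dim)
reference = copy.deepcopy(model).eval()    # frozen copy used only for utility preservation
safety_anchor = torch.zeros(dim)           # hypothetical "safe" activation target
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

harmful_prompt = torch.randn(1, dim)       # placeholder for a harmful query embedding
benign_prompt = torch.randn(1, dim)        # placeholder for a benign query embedding

for step in range(100):
    # 1) Mine an on-policy adversarial sample against the *current* model.
    adv = mine_on_policy_adversarial(model, harmful_prompt)

    # 2) Expose the intermediate-layer activation ("jailbreak path") it triggers.
    _, adv_act = model(adv)

    # 3) Rectify that activation toward the safety anchor (unlearning-style loss) ...
    rectify_loss = nn.functional.mse_loss(adv_act, safety_anchor.expand_as(adv_act))

    # 4) ... while keeping benign behavior close to the frozen reference model.
    benign_out, _ = model(benign_prompt)
    with torch.no_grad():
        ref_out, _ = reference(benign_prompt)
    utility_loss = nn.functional.mse_loss(benign_out, ref_out)

    loss = rectify_loss + utility_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a real setting, the toy gradient-ascent mining step would be replaced by an actual jailbreak-prompt generator run against the current policy, and the two MSE terms by the paper's own rectification and utility-preservation objectives, which the abstract does not specify.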
Similar Papers
SafeLLM: Unlearning Harmful Outputs from Large Language Models against Jailbreak Attacks
Machine Learning (CS)
Makes AI models forget bad things they learned.
The Dual Power of Interpretable Token Embeddings: Jailbreaking Attacks and Defenses for Diffusion Model Unlearning
CV and Pattern Recognition
Makes AI safer by stopping bad words.
Probabilistic Modeling of Jailbreak on Multimodal LLMs: From Quantification to Application
Cryptography and Security
Makes AI safer from harmful prompts.