Align in Depth: Defending Jailbreak Attacks via Progressive Answer Detoxification
By: Yingjie Zhang, Tong Liu, Zhe Zhao, and more
Potential Business Impact:
Stops AI from saying bad things when tricked.
Large Language Models (LLMs) are vulnerable to jailbreak attacks, which use crafted prompts to elicit toxic responses. These attacks exploit LLMs' difficulty in dynamically detecting harmful intent during the generation process. Traditional safety alignment methods, which often rely on the initial few generation steps, are ineffective due to their limited computational budget. This paper proposes DEEPALIGN, a robust defense framework that fine-tunes LLMs to progressively detoxify generated content, significantly expanding the computational budget available for mitigating harmful generation and improving its effectiveness. Our approach uses a hybrid loss function operating on hidden states to directly improve LLMs' inherent awareness of toxicity during generation. Furthermore, we redefine safe responses by generating semantically relevant answers to harmful queries, thereby increasing robustness against representation-mutation attacks. Evaluations across multiple LLMs demonstrate state-of-the-art defense performance against six different attack types, reducing Attack Success Rates by up to two orders of magnitude compared to the previous state-of-the-art defense while preserving utility. This work advances LLM safety by addressing the limitations of conventional alignment through dynamic, context-aware mitigation.
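To make the "hybrid loss function operating on hidden states" concrete, here is a minimal sketch of one way such a loss could be structured: a standard language-modeling term combined with a toxicity term computed from the model's hidden states. This is not the paper's actual implementation; the probe head, the `lambda_tox` weighting, and all names are illustrative assumptions.

```python
# Sketch only: a hybrid loss combining next-token prediction with a
# toxicity-awareness term on hidden states. The linear probe and the
# lambda_tox weighting are hypothetical, chosen for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridDetoxLoss(nn.Module):
    def __init__(self, hidden_size: int, lambda_tox: float = 0.5):
        super().__init__()
        # Hypothetical linear probe scoring each hidden state for toxicity.
        self.toxicity_probe = nn.Linear(hidden_size, 1)
        self.lambda_tox = lambda_tox

    def forward(self, logits, labels, hidden_states, toxic_mask):
        # Standard causal language-modeling loss over target tokens.
        lm_loss = F.cross_entropy(
            logits.view(-1, logits.size(-1)),
            labels.view(-1),
            ignore_index=-100,
        )
        # Toxicity term: push the probe's score toward 1 on tokens marked
        # toxic and toward 0 elsewhere, so the hidden representations carry
        # a signal the model can react to mid-generation.
        tox_scores = self.toxicity_probe(hidden_states).squeeze(-1)
        tox_loss = F.binary_cross_entropy_with_logits(
            tox_scores, toxic_mask.float()
        )
        return lm_loss + self.lambda_tox * tox_loss
```

In this sketch, supervising hidden states directly (rather than only the final logits) is what lets detoxification act progressively during generation instead of relying on the first few output tokens.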
Similar Papers
Defending Large Language Models Against Jailbreak Exploits with Responsible AI Considerations
Cryptography and Security
Stops AI from saying bad or unsafe things.
Bypassing Prompt Guards in Production with Controlled-Release Prompting
Machine Learning (CS)
Breaks AI safety rules by tricking chatbots.
A Simple and Efficient Jailbreak Method Exploiting LLMs' Helpfulness
Cryptography and Security
Finds ways to trick AI into saying bad things.