MUSE: MCTS-Driven Red Teaming Framework for Enhanced Multi-Turn Dialogue Safety in Large Language Models
By: Siyu Yan, Long Zeng, Xuecheng Wu, and more
Potential Business Impact:
Stops AI chatbots from being tricked into saying harmful things over the course of a conversation.
As large language models (LLMs) become widely adopted, ensuring their alignment with human values is crucial to prevent jailbreaks where adversaries manipulate models to produce harmful content. While most defenses target single-turn attacks, real-world usage often involves multi-turn dialogues, exposing models to attacks that exploit conversational context to bypass safety measures. We introduce MUSE, a comprehensive framework tackling multi-turn jailbreaks from both attack and defense angles. For attacks, we propose MUSE-A, a method that uses frame semantics and heuristic tree search to explore diverse semantic trajectories. For defense, we present MUSE-D, a fine-grained safety alignment approach that intervenes early in dialogues to reduce vulnerabilities. Extensive experiments on various models show that MUSE effectively identifies and mitigates multi-turn vulnerabilities. Code is available at https://github.com/yansiyu02/MUSE.
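The abstract does not spell out MUSE-A's search procedure, but the title names MCTS-driven tree search over multi-turn attack trajectories. The sketch below is a minimal, generic illustration of that idea, not the paper's implementation: `generate_candidates` and `harmfulness_score` are hypothetical stubs standing in for an attacker LLM and a safety judge, and the UCT selection rule is a standard choice assumed here for concreteness.

```python
import math
import random

# Hypothetical stubs: in a real red-teaming pipeline these would call an
# attacker LLM and a safety judge. They are randomized placeholders so the
# sketch runs standalone.
def generate_candidates(dialogue, k=3):
    """Propose k candidate next-turn attack prompts (stubbed)."""
    return [dialogue + [f"probe-{len(dialogue)}-{i}"] for i in range(k)]

def harmfulness_score(dialogue):
    """Judge how close the dialogue is to eliciting harmful output (stubbed)."""
    return random.random()

class Node:
    def __init__(self, dialogue, parent=None):
        self.dialogue = dialogue   # list of conversation turns so far
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0           # cumulative heuristic reward

    def uct(self, c=1.4):
        # Standard UCT: exploit high mean reward, explore rarely-visited branches.
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits)

def mcts_attack(root_dialogue, iterations=50, max_depth=5):
    root = Node(root_dialogue)
    for _ in range(iterations):
        # 1. Selection: descend by UCT until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=Node.uct)
        # 2. Expansion: grow candidate next turns if depth allows.
        if len(node.dialogue) < max_depth:
            node.children = [Node(d, node) for d in generate_candidates(node.dialogue)]
            node = random.choice(node.children)
        # 3. Simulation: score the dialogue with the heuristic judge.
        reward = harmfulness_score(node.dialogue)
        # 4. Backpropagation: update statistics along the path to the root.
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Return the most-visited first move's trajectory as the strongest attack path.
    best = max(root.children, key=lambda n: n.visits)
    return best.dialogue

print(mcts_attack(["seed topic"]))
```

Picking the most-visited child rather than the highest-scoring one is the usual robust choice in MCTS, since visit counts smooth over noisy reward estimates; in practice the stubs would be replaced with model calls and the reward with a calibrated jailbreak-success judge.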
Similar Papers
SafeMT: Multi-turn Safety for Multimodal Language Models
Computation and Language
Makes AI safer in long, tricky talks.
X-Teaming: Multi-Turn Jailbreaks and Defenses with Adaptive Multi-Agents
Cryptography and Security
Finds ways AI can be tricked in conversations.
M2S: Multi-turn to Single-turn jailbreak in Red Teaming for LLMs
Computation and Language
Makes AI safer by finding its hidden tricks.