Multi-Turn Jailbreaks Are Simpler Than They Seem
By: Xiaoxue Yang, Jaeha Lee, Anna-Katharina Dick, and more
Potential Business Impact:
Makes AI safer by finding new ways to trick it.
While defenses against single-turn jailbreak attacks on Large Language Models (LLMs) have improved significantly, multi-turn jailbreaks remain a persistent vulnerability, often achieving success rates exceeding 70% against models optimized for single-turn protection. This work presents an empirical analysis of automated multi-turn jailbreak attacks across state-of-the-art models including GPT-4, Claude, and Gemini variants, using the StrongREJECT benchmark. Our findings challenge the perceived sophistication of multi-turn attacks: when accounting for the attacker's ability to learn from how models refuse harmful requests, multi-turn jailbreaking approaches are approximately equivalent to simply resampling single-turn attacks multiple times. Moreover, attack success is correlated among similar models, making it easier to jailbreak newly released ones. Additionally, for reasoning models, we find, surprisingly, that higher reasoning effort often leads to higher attack success rates. Our results have important implications for AI safety evaluation and the design of jailbreak-resistant systems. We release the source code at https://github.com/diogo-cruz/multi_turn_simpler
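The core claim, that multi-turn jailbreaking is roughly on par with best-of-k resampling of single-turn attacks, can be illustrated with a small calculation. Below is a minimal, illustrative sketch in Python; it is not the paper's released code (see the repository linked above), and the per-attempt success rate and independence assumption are hypothetical, chosen only to show how quickly best-of-k success compounds.

```python
# Minimal sketch (not the paper's code): if a single-turn jailbreak attempt
# succeeds independently with probability p, then resampling it k times
# succeeds with probability 1 - (1 - p)**k. The paper's claim is that
# automated multi-turn attacks perform roughly like this best-of-k baseline.

def best_of_k_success(p: float, k: int) -> float:
    """Probability that at least one of k independent single-turn attempts succeeds."""
    return 1.0 - (1.0 - p) ** k

if __name__ == "__main__":
    per_attempt = 0.20          # illustrative single-turn success rate (hypothetical)
    for k in (1, 5, 10, 20):    # number of resampled single-turn attempts
        print(f"k={k:>2}: best-of-k success = {best_of_k_success(per_attempt, k):.2f}")
```

Under this independence assumption, an attack that succeeds only 20% of the time per attempt already clears roughly 70% success within ten resamples, which is the kind of single-turn baseline the paper suggests multi-turn methods should be compared against.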
Similar Papers
Many-Turn Jailbreaking
Computation and Language
Makes AI assistants say bad things longer.
M2S: Multi-turn to Single-turn jailbreak in Red Teaming for LLMs
Computation and Language
Makes AI safer by finding its hidden tricks.
AutoAdv: Automated Adversarial Prompting for Multi-Turn Jailbreaking of Large Language Models
Computation and Language
Makes AI assistants more easily tricked.