RL-MTJail: Reinforcement Learning for Automated Black-Box Multi-Turn Jailbreaking of Large Language Models
By: Xiqiao Xiong, Ouxiang Li, Zhuo Liu, and more
Potential Business Impact:
Teaches one AI model to trick other AI models into saying harmful things.
Large language models are vulnerable to jailbreak attacks, threatening their safe deployment in real-world applications. This paper studies black-box multi-turn jailbreaks, aiming to train attacker LLMs to elicit harmful content from black-box models through a sequence of prompt-output interactions. Existing approaches typically rely on single-turn optimization, which is insufficient for learning long-term attack strategies. To bridge this gap, we formulate the problem as a multi-turn reinforcement learning task, directly optimizing the harmfulness of the final-turn output as the outcome reward. To mitigate sparse supervision and promote long-term attack strategies, we propose two heuristic process rewards: (1) controlling the harmfulness of intermediate outputs to prevent triggering the black-box model's rejection mechanisms, and (2) maintaining the semantic relevance of intermediate outputs to avoid drifting into irrelevant content. Experimental results on multiple benchmarks show consistently improved attack success rates across multiple models, highlighting the effectiveness of our approach. The code is available at https://github.com/xxiqiao/RL-MTJail. Warning: This paper contains examples of harmful content.
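The reward structure the abstract describes can be made concrete with a short sketch. Everything below is illustrative rather than the authors' released implementation: the judge scores, the harmfulness cap, and the weights are assumptions (see the linked repository for the actual code).

```python
# A minimal sketch of the per-turn reward design described in the abstract.
# The judge scores, the harmfulness cap, and the weights are illustrative
# assumptions, not values taken from the paper.

def turn_reward(
    harm_score: float,       # judged harmfulness of this turn's output, in [0, 1]
    relevance_score: float,  # judged relevance to the attack goal, in [0, 1]
    is_final_turn: bool,
    harm_cap: float = 0.5,   # assumed threshold below which refusals stay unlikely
    w_harm: float = 0.5,     # assumed weight for the harmfulness-control reward
    w_rel: float = 0.5,      # assumed weight for the relevance reward
) -> float:
    if is_final_turn:
        # Outcome reward: directly optimize the harmfulness of the final output.
        return harm_score
    # Process reward (1): keep intermediate harmfulness under the cap so the
    # black-box model's rejection mechanisms are not triggered mid-dialogue.
    harm_control = 1.0 if harm_score <= harm_cap else 0.0
    # Process reward (2): keep intermediate outputs semantically on-topic so
    # the conversation does not drift into irrelevant content.
    return w_harm * harm_control + w_rel * relevance_score
```

Note the asymmetry: intermediate turns are rewarded for staying *below* the harmfulness cap rather than for being harmful, matching the paper's point that overtly harmful intermediate outputs trigger the target model's refusals and cut the attack trajectory short.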
Similar Papers
Many-Turn Jailbreaking
Computation and Language
Makes AI assistants say harmful things over many turns.
Multi-Turn Jailbreaks Are Simpler Than They Seem
Machine Learning (CS)
Makes AI safer by finding new ways to trick it.
Jailbreak-R1: Exploring the Jailbreak Capabilities of LLMs via Reinforcement Learning
Artificial Intelligence
Finds ways to make AI safer and more helpful.