Score: 2

RL-MTJail: Reinforcement Learning for Automated Black-Box Multi-Turn Jailbreaking of Large Language Models

Published: December 8, 2025 | arXiv ID: 2512.07761v1

By: Xiqiao Xiong, Ouxiang Li, Zhuo Liu, and more

Potential Business Impact:

Teaches one AI model to trick other AI models into saying harmful things, exposing safety gaps before real attackers can.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models are vulnerable to jailbreak attacks, threatening their safe deployment in real-world applications. This paper studies black-box multi-turn jailbreaks, aiming to train attacker LLMs to elicit harmful content from black-box models through a sequence of prompt-output interactions. Existing approaches typically rely on single-turn optimization, which is insufficient for learning long-term attack strategies. To bridge this gap, we formulate the problem as a multi-turn reinforcement learning task, directly optimizing the harmfulness of the final-turn output as the outcome reward. To mitigate sparse supervision and promote long-term attack strategies, we propose two heuristic process rewards: (1) controlling the harmfulness of intermediate outputs to prevent triggering the black-box model's rejection mechanisms, and (2) maintaining the semantic relevance of intermediate outputs to avoid drifting into irrelevant content. Experimental results show consistently improved attack success rates across multiple benchmarks and target models, highlighting the effectiveness of our approach. The code is available at https://github.com/xxiqiao/RL-MTJail. Warning: This paper contains examples of harmful content.
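The reward design described in the abstract lends itself to a short sketch. The Python below is a minimal illustration of combining the outcome reward with the two heuristic process rewards; the function name `turn_reward`, the judge and relevance scores, and the weights and threshold are all assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the reward shaping scheme (names, weights, and
# threshold are assumptions, not taken from the paper's code).

def turn_reward(
    judge_harm: float,           # harmfulness score of this turn's output in [0, 1] (assumed judge model)
    relevance: float,            # semantic relevance to the attack goal in [0, 1] (assumed similarity score)
    is_final_turn: bool,
    harm_threshold: float = 0.5, # assumed cap on intermediate harmfulness
    w_harm: float = 0.1,         # assumed weight for the harmfulness-control term
    w_rel: float = 0.1,          # assumed weight for the relevance term
) -> float:
    """Combine the outcome reward with the two heuristic process rewards."""
    if is_final_turn:
        # Outcome reward: the harmfulness of the final-turn output is
        # optimized directly.
        return judge_harm
    # Process reward 1: penalize intermediate outputs that get harmful
    # enough to risk triggering the target model's rejection mechanisms.
    harm_term = -w_harm * max(0.0, judge_harm - harm_threshold)
    # Process reward 2: reward staying semantically on-topic so the
    # conversation does not drift into irrelevant content.
    rel_term = w_rel * relevance
    return harm_term + rel_term


# Example: per-turn rewards over a three-turn attack trajectory.
rewards = [
    turn_reward(0.2, 0.9, is_final_turn=False),
    turn_reward(0.4, 0.8, is_final_turn=False),
    turn_reward(0.9, 0.0, is_final_turn=True),  # relevance unused on the final turn
]
```

Under this shaping, intermediate turns receive only small, bounded signals, so the final-turn harmfulness remains the dominant learning objective while the process terms mitigate sparse supervision along the way.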

Country of Origin
🇸🇬 🇨🇳 Singapore, China

Repos / Data Links
https://github.com/xxiqiao/RL-MTJail

Page Count
19 pages

Category
Computer Science:
Artificial Intelligence