Score: 1

Jailbreaking Large Language Models through Iterative Tool-Disguised Attacks via Reinforcement Learning

Published: January 9, 2026 | arXiv ID: 2601.05466v1

By: Zhaoqi Wang, Zijian Zhang, Daqing He, and more

Potential Business Impact:

Shows how attackers can trick AI models into producing harmful content they are designed to refuse, exposing gaps in current safety filters.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) have demonstrated remarkable capabilities across diverse applications; however, they remain critically vulnerable to jailbreak attacks that elicit harmful responses violating human values and safety guidelines. Despite extensive research on defense mechanisms, existing safeguards prove insufficient against sophisticated adversarial strategies. In this work, we propose iMIST (interactive Multi-step Progressive Tool-disguised Jailbreak Attack), a novel adaptive jailbreak method that synergistically exploits vulnerabilities in current defense mechanisms. iMIST disguises malicious queries as normal tool invocations to bypass content filters, while an interactive progressive optimization algorithm dynamically escalates response harmfulness across multi-turn dialogues guided by real-time harmfulness assessment. Our experiments on widely used models demonstrate that iMIST achieves higher attack effectiveness while maintaining low rejection rates. These results reveal critical vulnerabilities in current LLM safety mechanisms and underscore the urgent need for more robust defense strategies.

Country of Origin
🇨🇳 China, 🇳🇿 New Zealand

Page Count
10 pages

Category
Computer Science: Cryptography and Security