Jailbreaking Large Language Models through Iterative Tool-Disguised Attacks via Reinforcement Learning
By: Zhaoqi Wang, Zijian Zhang, Daqing He, and more
Potential Business Impact:
Tricks AI into saying bad things it shouldn't.
Large language models (LLMs) have demonstrated remarkable capabilities across diverse applications; however, they remain critically vulnerable to jailbreak attacks that elicit harmful responses violating human values and safety guidelines. Despite extensive research on defense mechanisms, existing safeguards prove insufficient against sophisticated adversarial strategies. In this work, we propose iMIST (interactive Multi-step Progressive Tool-disguised jailbreak attack), a novel adaptive jailbreak method that jointly exploits vulnerabilities in current defense mechanisms. iMIST disguises malicious queries as normal tool invocations to bypass content filters, while introducing an interactive progressive optimization algorithm that dynamically escalates response harmfulness through multi-turn dialogues guided by real-time harmfulness assessment. Our experiments on widely used models demonstrate that iMIST achieves higher attack effectiveness than existing jailbreak methods while maintaining low rejection rates. These results reveal critical vulnerabilities in current LLM safety mechanisms and underscore the urgent need for more robust defense strategies.
Similar Papers
Defending Large Language Models Against Jailbreak Exploits with Responsible AI Considerations
Cryptography and Security
Stops AI from saying bad or unsafe things.
Bypassing Prompt Guards in Production with Controlled-Release Prompting
Machine Learning (CS)
Breaks AI safety rules by tricking chatbots.
Multi-turn Jailbreaking Attack in Multi-Modal Large Language Models
Cryptography and Security
Stops smart AI from being tricked by bad questions.