EquaCode: A Multi-Strategy Jailbreak Approach for Large Language Models via Equation Solving and Code Completion
By: Zhen Liang, Hai Huang, Zhengkui Chen
Potential Business Impact:
Finds weaknesses in AI safety rules so models can be made more robust against harmful instructions.
Large language models (LLMs), such as ChatGPT, have achieved remarkable success across a wide range of fields. However, their trustworthiness remains a significant concern, as they are still susceptible to jailbreak attacks aimed at eliciting inappropriate or harmful responses. Existing jailbreak attacks mainly operate at the natural language level and rely on a single attack strategy, limiting their effectiveness in comprehensively assessing LLM robustness. In this paper, we propose EquaCode, a novel multi-strategy jailbreak approach for large language models via equation solving and code completion. The approach transforms malicious intent into a mathematical problem and then requires the LLM to solve it using code, leveraging the complexity of cross-domain tasks to divert the model's focus toward task completion rather than safety constraints. Experimental results show that EquaCode achieves an average success rate of 91.19% on the GPT series and 98.65% across three state-of-the-art LLMs, all with only a single query. Furthermore, ablation experiments demonstrate that EquaCode outperforms either the mathematical-equation module or the code-completion module used alone, suggesting a strong synergistic effect and demonstrating that a multi-strategy approach yields results greater than the sum of its parts.
Similar Papers
Casting a SPELL: Sentence Pairing Exploration for LLM Limitation-breaking
Cryptography and Security
Finds ways to break AI limits by pairing sentences.
The Tower of Babel Revisited: Multilingual Jailbreak Prompts on Closed-Source Large Language Models
Computation and Language
Breaks AI safety rules in different languages.
Defending Large Language Models Against Jailbreak Exploits with Responsible AI Considerations
Cryptography and Security
Stops AI from saying bad or unsafe things.