Concealment of Intent: A Game-Theoretic Analysis
By: Xinbo Wu, Abhishek Umrawal, Lav R. Varshney
Potential Business Impact:
Shows that adversarial prompts can hide malicious intent from deployed LLM safety filters, a risk for products that rely on prompt and response moderation.
As large language models (LLMs) grow more capable, concerns about their safe deployment have also grown. Although alignment mechanisms have been introduced to deter misuse, they remain vulnerable to carefully designed adversarial prompts. In this work, we present a scalable attack strategy: intent-hiding adversarial prompting, which conceals malicious intent through the composition of skills. We develop a game-theoretic framework to model the interaction between such attacks and defense systems that apply both prompt and response filtering. Our analysis identifies equilibrium points and reveals structural advantages for the attacker. To counter these threats, we propose and analyze a defense mechanism tailored to intent-hiding attacks. Empirically, we validate the attack's effectiveness on multiple real-world LLMs across a range of malicious behaviors, demonstrating clear advantages over existing adversarial prompting techniques.
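To make the game-theoretic framing concrete, below is a minimal sketch of an attacker-versus-defender game in Python. The 2x2 strategy sets, payoff values, and function names are illustrative assumptions, not the paper's actual model; they only encode the abstract's idea that intent hiding evades prompt-level filtering and is partially caught by response filtering.

```python
# Illustrative sketch of the attacker-vs-defender game described in the abstract.
# The strategies and payoff numbers below are hypothetical assumptions, not the
# paper's actual formulation.

import itertools

# Attacker strategies: send a direct malicious prompt, or hide intent by
# composing benign-looking sub-tasks ("skill composition").
ATTACKER = ["direct", "intent_hiding"]
# Defender strategies: filter prompts only, or filter both prompts and responses.
DEFENDER = ["prompt_filter", "prompt_and_response_filter"]

# payoff[a][d] = (attacker utility, defender utility); values chosen so that
# direct prompts are caught, while intent hiding slips past the prompt filter
# and is only partially mitigated by response filtering.
PAYOFF = {
    "direct": {
        "prompt_filter": (-1.0, 1.0),
        "prompt_and_response_filter": (-1.0, 1.0),
    },
    "intent_hiding": {
        "prompt_filter": (1.0, -1.0),
        "prompt_and_response_filter": (0.2, -0.2),
    },
}

def best_response_attacker(d):
    """Attacker strategy maximizing attacker utility against defender choice d."""
    return max(ATTACKER, key=lambda a: PAYOFF[a][d][0])

def best_response_defender(a):
    """Defender strategy maximizing defender utility against attacker choice a."""
    return max(DEFENDER, key=lambda d: PAYOFF[a][d][1])

def pure_nash_equilibria():
    """Enumerate strategy pairs where neither player gains by unilaterally deviating."""
    return [
        (a, d)
        for a, d in itertools.product(ATTACKER, DEFENDER)
        if a == best_response_attacker(d) and d == best_response_defender(a)
    ]

if __name__ == "__main__":
    print(pure_nash_equilibria())  # -> [('intent_hiding', 'prompt_and_response_filter')]
```

Under these assumed payoffs, the only pure-strategy equilibrium pairs intent hiding with combined prompt-and-response filtering, and the attacker still retains positive utility there, which mirrors the "structural advantages for the attacker" claimed in the abstract; the actual equilibrium analysis in the paper may differ.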
Similar Papers
Exploring the Vulnerability of the Content Moderation Guardrail in Large Language Models via Intent Manipulation
Computation and Language
Shows that manipulating stated intent can bypass content moderation guardrails in LLMs.
Compromising Honesty and Harmlessness in Language Models via Deception Attacks
Computation and Language
Shows that deception attacks can undermine the honesty and harmlessness of language models.
LLMs as Deceptive Agents: How Role-Based Prompting Induces Semantic Ambiguity in Puzzle Tasks
Computation and Language
Shows that role-based prompting can lead LLMs to produce puzzles with misleading, semantically ambiguous wording.