Concealment of Intent: A Game-Theoretic Analysis

Published: May 27, 2025 | arXiv ID: 2505.20841v2

By: Xinbo Wu, Abhishek Umrawal, Lav R. Varshney

Potential Business Impact:

Shows how attackers can trick AI systems into harmful behavior even under active monitoring.

Business Areas:
Semantic Search, Internet Services

As large language models (LLMs) grow more capable, concerns about their safe deployment have also grown. Although alignment mechanisms have been introduced to deter misuse, they remain vulnerable to carefully designed adversarial prompts. In this work, we present a scalable attack strategy: intent-hiding adversarial prompting, which conceals malicious intent through the composition of skills. We develop a game-theoretic framework to model the interaction between such attacks and defense systems that apply both prompt and response filtering. Our analysis identifies equilibrium points and reveals structural advantages for the attacker. To counter these threats, we propose and analyze a defense mechanism tailored to intent-hiding attacks. Empirically, we validate the attack's effectiveness on multiple real-world LLMs across a range of malicious behaviors, demonstrating clear advantages over existing adversarial prompting techniques.
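The abstract frames the attack-defense interaction as a game between an attacker (who may hide intent) and a defender (who filters prompts or responses), and reports equilibrium points favoring the attacker. As a minimal sketch of that framing, the toy two-player game below enumerates pure-strategy equilibria of a zero-sum game; the strategy names and payoff values are purely illustrative assumptions, not taken from the paper.

```python
import itertools

# Hypothetical payoff matrix (attacker's success probability) for a toy
# attacker-defender game. These numbers are illustrative only; the paper's
# actual model and payoffs differ.
ATTACKER = ["direct_prompt", "intent_hiding"]
DEFENDER = ["prompt_filter", "response_filter"]
PAYOFF = {
    ("direct_prompt", "prompt_filter"): 0.1,   # overt intent is easy to catch
    ("direct_prompt", "response_filter"): 0.3,
    ("intent_hiding", "prompt_filter"): 0.7,   # hidden intent evades prompt checks
    ("intent_hiding", "response_filter"): 0.4,
}

def pure_equilibria():
    """Return strategy pairs where neither player gains by deviating
    unilaterally (pure-strategy Nash equilibria of the zero-sum game)."""
    eq = []
    for a, d in itertools.product(ATTACKER, DEFENDER):
        # Attacker maximizes success; defender minimizes it.
        atk_ok = all(PAYOFF[(a, d)] >= PAYOFF[(a2, d)] for a2 in ATTACKER)
        def_ok = all(PAYOFF[(a, d)] <= PAYOFF[(a, d2)] for d2 in DEFENDER)
        if atk_ok and def_ok:
            eq.append((a, d))
    return eq

print(pure_equilibria())  # → [('intent_hiding', 'response_filter')]
```

Under these assumed payoffs, the equilibrium has the attacker hiding intent and the defender falling back to response filtering, with the attacker still guaranteed a nonzero success rate; this mirrors (in miniature) the structural attacker advantage the abstract describes.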

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
20 pages

Category
Computer Science:
Computation and Language