Reasoning Hijacking: Subverting LLM Classification via Decision-Criteria Injection
By: Yuansen Liu, Yixuan Tang, Anthony Kum Hoe Tun
Potential Business Impact:
Makes AI think wrong by tricking its logic.
Current LLM safety research predominantly focuses on mitigating Goal Hijacking, preventing attackers from redirecting a model's high-level objective (e.g., from "summarizing emails" to "phishing users"). In this paper, we argue that this perspective is incomplete and highlight a critical vulnerability in Reasoning Alignment. We propose a new adversarial paradigm: Reasoning Hijacking and instantiate it with Criteria Attack, which subverts model judgments by injecting spurious decision criteria without altering the high-level task goal. Unlike Goal Hijacking, which attempts to override the system prompt, Reasoning Hijacking accepts the high-level goal but manipulates the model's decision-making logic by injecting spurious reasoning shortcut. Though extensive experiments on three different tasks (toxic comment, negative review, and spam detection), we demonstrate that even newest models are prone to prioritize injected heuristic shortcuts over rigorous semantic analysis. The results are consistent over different backbones. Crucially, because the model's "intent" remains aligned with the user's instructions, these attacks can bypass defenses designed to detect goal deviation (e.g., SecAlign, StruQ), exposing a fundamental blind spot in the current safety landscape. Data and code are available at https://github.com/Yuan-Hou/criteria_attack
Similar Papers
Chain-of-Thought Hijacking
Artificial Intelligence
Makes AI think it's safe, but it's not.
ReasAlign: Reasoning Enhanced Safety Alignment against Prompt Injection Attack
Cryptography and Security
Protects smart computer helpers from bad instructions.
Reasoning Introduces New Poisoning Attacks Yet Makes Them More Complicated
Cryptography and Security
Makes AI harder to trick with hidden bad instructions.