Score: 1

Reasoning Hijacking: Subverting LLM Classification via Decision-Criteria Injection

Published: January 15, 2026 | arXiv ID: 2601.10294v1

By: Yuansen Liu, Yixuan Tang, Anthony Kum Hoe Tun

Potential Business Impact:

Makes AI think wrong by tricking its logic.

Business Areas:
Semantic Search Internet Services

Current LLM safety research predominantly focuses on mitigating Goal Hijacking, preventing attackers from redirecting a model's high-level objective (e.g., from "summarizing emails" to "phishing users"). In this paper, we argue that this perspective is incomplete and highlight a critical vulnerability in Reasoning Alignment. We propose a new adversarial paradigm: Reasoning Hijacking and instantiate it with Criteria Attack, which subverts model judgments by injecting spurious decision criteria without altering the high-level task goal. Unlike Goal Hijacking, which attempts to override the system prompt, Reasoning Hijacking accepts the high-level goal but manipulates the model's decision-making logic by injecting spurious reasoning shortcut. Though extensive experiments on three different tasks (toxic comment, negative review, and spam detection), we demonstrate that even newest models are prone to prioritize injected heuristic shortcuts over rigorous semantic analysis. The results are consistent over different backbones. Crucially, because the model's "intent" remains aligned with the user's instructions, these attacks can bypass defenses designed to detect goal deviation (e.g., SecAlign, StruQ), exposing a fundamental blind spot in the current safety landscape. Data and code are available at https://github.com/Yuan-Hou/criteria_attack

Country of Origin
🇸🇬 Singapore

Repos / Data Links

Page Count
18 pages

Category
Computer Science:
Cryptography and Security