Reasoning Introduces New Poisoning Attacks Yet Makes Them More Complicated
By: Hanna Foerster , Ilia Shumailov , Yiren Zhao and more
Potential Business Impact:
Makes AI harder to trick with hidden bad instructions.
Early research into data poisoning attacks against Large Language Models (LLMs) demonstrated the ease with which backdoors could be injected. More recent LLMs add step-by-step reasoning, expanding the attack surface to include the intermediate chain-of-thought (CoT) and its inherent trait of decomposing problems into subproblems. Using these vectors for more stealthy poisoning, we introduce ``decomposed reasoning poison'', in which the attacker modifies only the reasoning path, leaving prompts and final answers clean, and splits the trigger across multiple, individually harmless components. Fascinatingly, while it remains possible to inject these decomposed poisons, reliably activating them to change final answers (rather than just the CoT) is surprisingly difficult. This difficulty arises because the models can often recover from backdoors that are activated within their thought processes. Ultimately, it appears that an emergent form of backdoor robustness is originating from the reasoning capabilities of these advanced LLMs, as well as from the architectural separation between reasoning and final answer generation.
Similar Papers
Rethinking Reasoning: A Survey on Reasoning-based Backdoors in LLMs
Cryptography and Security
Stops smart computer programs from being tricked.
System Prompt Poisoning: Persistent Attacks on Large Language Models Beyond User Injection
Cryptography and Security
Makes AI give wrong answers by tricking its instructions.
Reasoning-Style Poisoning of LLM Agents via Stealthy Style Transfer: Process-Level Attacks and Runtime Monitoring in RSV Space
Cryptography and Security
Makes AI agents think too much or too fast.