Your Agent Can Defend Itself against Backdoor Attacks

Published: June 10, 2025 | arXiv ID: 2506.08336v2

By: Li Changjiang, Liang Jiacheng, Cao Bochuan, and more

Potential Business Impact:

Stops hidden triggers in poisoned training data from hijacking AI software agents.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Despite their growing adoption across domains, large language model (LLM)-powered agents face significant security risks from backdoor attacks during training and fine-tuning. These compromised agents can subsequently be manipulated to execute malicious operations when presented with specific triggers in their inputs or environments. To address this pressing risk, we present ReAgent, a novel defense against a range of backdoor attacks on LLM-based agents. Intuitively, backdoor attacks often result in inconsistencies among the user's instruction, the agent's planning, and its execution. Drawing on this insight, ReAgent employs a two-level approach to detect potential backdoors. At the execution level, ReAgent verifies consistency between the agent's thoughts and actions; at the planning level, ReAgent leverages the agent's capability to reconstruct the instruction based on its thought trajectory, checking for consistency between the reconstructed instruction and the user's instruction. Extensive evaluation demonstrates ReAgent's effectiveness against various backdoor attacks across tasks. For instance, ReAgent reduces the attack success rate by up to 90% in database operation tasks, outperforming existing defenses by large margins. This work reveals the potential of utilizing compromised agents themselves to mitigate backdoor risks.
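The two-level consistency check described in the abstract can be sketched in a few lines. This is an illustrative approximation only: in ReAgent itself, the consistency judgments and the instruction reconstruction are performed by the LLM agent, whereas here a hypothetical word-overlap heuristic (`overlap_score`) and naive thought concatenation stand in for both, and all function names and thresholds are assumptions, not the paper's implementation.

```python
def overlap_score(a: str, b: str) -> float:
    """Jaccard word overlap between two texts.

    A crude stand-in for the LLM-based consistency judge used in ReAgent.
    """
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0


def detect_backdoor(instruction: str, steps: list[tuple[str, str]],
                    threshold: float = 0.2) -> bool:
    """Flag a trace as suspicious if either consistency level fails.

    `steps` is the agent's trajectory as (thought, action) pairs.
    Returns True when a potential backdoor is detected.
    """
    # Execution level: each action should be consistent with its thought.
    for thought, action in steps:
        if overlap_score(thought, action) < threshold:
            return True
    # Planning level: reconstruct the instruction from the thought
    # trajectory (naively concatenated here) and compare it with the
    # user's original instruction.
    reconstructed = " ".join(thought for thought, _ in steps)
    return overlap_score(instruction, reconstructed) < threshold
```

In this sketch a benign trace, whose thoughts and actions all track the instruction, passes both checks, while a triggered trace whose action diverges from its stated thought is flagged at the execution level.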

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
16 pages

Category
Computer Science:
Cryptography and Security