TraceAegis: Securing LLM-Based Agents via Hierarchical and Behavioral Anomaly Detection
By: Jiahao Liu, Bonan Ruan, Xianglin Yang, and more
Potential Business Impact:
Protects smart computer helpers from being tricked.
LLM-based agents have demonstrated promising adaptability in real-world applications. However, these agents remain vulnerable to a wide range of attacks, such as tool poisoning and malicious instructions, that compromise their execution flow and can lead to serious consequences like data breaches and financial loss. Existing studies typically attempt to mitigate such anomalies by predefining specific rules and enforcing them at runtime to enhance safety. Yet, designing comprehensive rules is difficult, requiring extensive manual effort and still leaving gaps that result in false negatives. As agent systems evolve into complex software systems, we take inspiration from software system security and propose TraceAegis, a provenance-based analysis framework that leverages agent execution traces to detect potential anomalies. In particular, TraceAegis constructs a hierarchical structure to abstract stable execution units that characterize normal agent behaviors. These units are then summarized into constrained behavioral rules that specify the conditions necessary to complete a task. By validating execution traces against both hierarchical and behavioral constraints, TraceAegis is able to effectively detect abnormal behaviors. To evaluate the effectiveness of TraceAegis, we introduce TraceAegis-Bench, a dataset covering two representative scenarios: healthcare and corporate procurement. Each scenario includes 1,300 benign behaviors and 300 abnormal behaviors, where the anomalies either violate the agent's execution order or break the semantic consistency of its execution sequence. Experimental results demonstrate that TraceAegis achieves strong performance on TraceAegis-Bench, successfully identifying the majority of abnormal behaviors.
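The paper does not expose an API, but a minimal sketch of the two checks the abstract describes (an order check against stable execution units and a semantic check against behavioral rules) might look like the following. All names here (ExecutionUnit, BehavioralRule, validate_trace) and the toy trace format are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of hierarchical + behavioral trace validation,
    # loosely following the TraceAegis abstract. Names and data shapes are
    # assumptions for illustration only.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class ExecutionUnit:
        """A stable unit of agent behavior: an expected ordered tool-call sequence."""
        name: str
        tool_sequence: list[str]

    @dataclass
    class BehavioralRule:
        """A semantic constraint over the trace events belonging to one unit."""
        unit_name: str
        check: Callable[[list[dict]], bool]
        description: str = ""

    def validate_trace(trace: list[dict],
                       units: list[ExecutionUnit],
                       rules: list[BehavioralRule]) -> list[str]:
        """Return anomaly messages; an empty list means the trace looks benign."""
        anomalies = []
        calls = [event["tool"] for event in trace]

        # Hierarchical check: each unit's tool sequence must appear in order
        # (as a subsequence) in the observed trace.
        for unit in units:
            it = iter(calls)
            if not all(tool in it for tool in unit.tool_sequence):
                anomalies.append(f"order violation: unit '{unit.name}' broken or missing")

        # Behavioral check: each rule must hold over its unit's trace events.
        for rule in rules:
            unit = next(u for u in units if u.name == rule.unit_name)
            events = [e for e in trace if e["tool"] in unit.tool_sequence]
            if not rule.check(events):
                anomalies.append(f"semantic violation: {rule.description}")

        return anomalies

    # Toy corporate-procurement example: payment must follow approval,
    # and the paid amount must equal the approved amount.
    units = [ExecutionUnit("purchase", ["request_approval", "issue_payment"])]
    rules = [BehavioralRule(
        "purchase",
        check=lambda ev: (bool(ev) and ev[0]["tool"] == "request_approval"
                          and ev[0]["args"]["amount"] == ev[-1]["args"]["amount"]),
        description="payment amount must equal approved amount")]

    trace = [{"tool": "request_approval", "args": {"amount": 500}},
             {"tool": "issue_payment",    "args": {"amount": 900}}]  # tampered amount
    print(validate_trace(trace, units, rules))
    # -> ['semantic violation: payment amount must equal approved amount']

In this toy run the call order satisfies the hierarchical constraint, so only the behavioral check flags the trace, mirroring the paper's distinction between anomalies that violate execution order and those that break semantic consistency.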
Similar Papers
AegisAgent: An Autonomous Defense Agent Against Prompt Injection Attacks in LLM-HARs
Cryptography and Security
Protects smart watches from sneaky instructions.
Aegis: Taxonomy and Optimizations for Overcoming Agent-Environment Failures in LLM Agents
Multiagent Systems
Helps smart computer programs finish jobs better.
AegisLLM: Scaling Agentic Systems for Self-Reflective Defense in LLM Security
Machine Learning (CS)
Protects AI from bad instructions and secrets.