Cross-LLM Generalization of Behavioral Backdoor Detection in AI Agent Supply Chains
By: Arun Chowdary Sanna
Potential Business Impact:
Finds hidden backdoors in AI agent tools across different AI systems.
As AI agents become integral to enterprise workflows, their reliance on shared tool libraries and pre-trained components creates significant supply chain vulnerabilities. While previous work has demonstrated behavioral backdoor detection within individual LLM architectures, the question of cross-LLM generalization remains unexplored, a gap with serious implications for organizations that deploy multiple AI systems. We present the first systematic study of cross-LLM behavioral backdoor detection, evaluating generalization across six production LLMs (GPT-5.1, Claude Sonnet 4.5, Grok 4.1, Llama 4 Maverick, GPT-OSS 120B, and DeepSeek Chat V3.1). Across 1,198 execution traces and 36 cross-model experiments, we quantify a critical finding: single-model detectors achieve 92.7% accuracy within their training distribution but only 49.2% on traces from other LLMs, which is no better than random guessing, a generalization gap of 43.4 percentage points. Our analysis reveals that this gap stems from model-specific behavioral signatures, particularly in temporal features (coefficient of variation > 0.8), while structural features remain stable across architectures. We then show that model-aware detection, which incorporates model identity as an additional feature, achieves 90.6% accuracy across all evaluated models. We release our multi-LLM trace dataset and detection framework to enable reproducible research.
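To make the model-aware variant concrete, the sketch below illustrates the general idea under stated assumptions: behavioral features extracted from an execution trace (a temporal coefficient of variation plus simple structural counts over tool calls) are concatenated with a one-hot model-identity feature and passed to an off-the-shelf classifier. The trace fields (`latencies`, `tool_calls`), the helper names, and the random-forest choice are illustrative and are not taken from the paper's released framework.

```python
# Minimal sketch of model-aware behavioral backdoor detection.
# Assumptions (not from the released framework): each trace is a dict with
# per-step "latencies" and a "tool_calls" sequence; feature choices and the
# classifier are illustrative only. Requires scikit-learn >= 1.2.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OneHotEncoder

def temporal_features(latencies):
    """Temporal statistics; the coefficient of variation (std/mean) is the
    kind of feature the abstract reports as model-specific (CV > 0.8)."""
    lat = np.asarray(latencies, dtype=float)
    mean = lat.mean()
    cv = lat.std() / mean if mean > 0 else 0.0
    return [mean, lat.std(), cv]

def structural_features(tool_calls):
    """Structural statistics over the tool-call sequence; these are the
    features reported as stable across LLM architectures."""
    return [len(tool_calls), len(set(tool_calls))]

def build_dataset(traces, model_ids):
    """Concatenate behavioral features with a one-hot model-identity
    feature, i.e. the 'model-aware' variant described in the abstract."""
    behavioral = np.array([
        temporal_features(t["latencies"]) + structural_features(t["tool_calls"])
        for t in traces
    ])
    encoder = OneHotEncoder(sparse_output=False, handle_unknown="ignore")
    identity = encoder.fit_transform(np.array(model_ids).reshape(-1, 1))
    return np.hstack([behavioral, identity]), encoder

# Usage (hypothetical data):
# X, encoder = build_dataset(traces, model_ids)
# clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```

Training such a classifier on pooled traces from all models, with the identity feature included, is the flavor of model-aware detection the abstract credits with 90.6% accuracy; the paper's exact feature set and classifier may differ.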
Similar Papers
Breaking Agent Backbones: Evaluating the Security of Backbone LLMs in AI Agents
Cryptography and Security
Finds AI weaknesses to make them safer.
The Dark Side of LLMs: Agent-based Attacks for Complete Computer Takeover
Cryptography and Security
AI can be tricked into installing computer viruses.
AutoBackdoor: Automating Backdoor Attacks via LLM Agents
Cryptography and Security
Creates hidden tricks for AI that are hard to find.