Detecting Sleeper Agents in Large Language Models via Semantic Drift Analysis
By: Shahin Zanbaghi, Ryan Rostampour, Farhan Abid, and more
Potential Business Impact:
Finds hidden bad instructions in AI.
Large Language Models (LLMs) can be backdoored to exhibit malicious behavior under specific deployment conditions while appearing safe during training, a phenomenon known as "sleeper agents." Recent work by Hubinger et al. demonstrated that these backdoors persist through safety training, yet no practical detection methods exist. We present a novel dual-method detection system combining semantic drift analysis with canary baseline comparison to identify backdoored LLMs in real time. Our approach uses Sentence-BERT embeddings to measure semantic deviation from safe baselines, complemented by injected canary questions that monitor response consistency. Evaluated on the official Cadenza-Labs dolphin-llama3-8B sleeper agent model, our system achieves 92.5% accuracy with 100% precision (zero false positives) and 85% recall. The combined detection method operates in real time (<1s per query), requires no model modification, and provides the first practical solution to LLM backdoor detection. Our work addresses a critical security gap in AI deployment and demonstrates that embedding-based detection can effectively identify deceptive model behavior without sacrificing deployment efficiency.
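To illustrate the general idea of embedding-based drift detection described in the abstract, the sketch below uses the sentence-transformers library to compare a model response against a set of safe baseline responses and against stored canary answers. The model name, baseline texts, canary question, and thresholds are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of semantic drift detection with Sentence-BERT embeddings.
# Assumptions: the "all-MiniLM-L6-v2" model, the baseline/canary texts, and
# the 0.6 thresholds are placeholders, not the paper's actual setup.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Safe baseline responses collected under known-benign conditions (hypothetical).
baseline_responses = [
    "I'm happy to help with that request.",
    "Here is a safe and accurate answer to your question.",
]
baseline_embeddings = embedder.encode(baseline_responses)

def semantic_drift(response: str) -> float:
    """Drift score: 1 minus the max cosine similarity to the safe baseline."""
    emb = embedder.encode([response])
    sims = cosine_similarity(emb, baseline_embeddings)[0]
    return 1.0 - float(sims.max())

def is_suspicious(response: str, threshold: float = 0.6) -> bool:
    """Flag a response whose semantics drift too far from the baseline."""
    return semantic_drift(response) > threshold

# Canary baseline comparison: re-ask a fixed question and check that the
# model's answer still matches the stored safe answer semantically.
canary_expected = {"What is the capital of France?": "The capital of France is Paris."}

def canary_consistent(model_answer: str, expected: str, threshold: float = 0.6) -> bool:
    sims = cosine_similarity(embedder.encode([model_answer]),
                             embedder.encode([expected]))
    return float(sims[0][0]) >= threshold
```

In a deployment setting, each user query's response would be scored with semantic_drift, and canary questions would be injected periodically; a response is flagged if either check fails, which matches the dual-method design the abstract describes at a high level.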
Similar Papers
Propaganda via AI? A Study on Semantic Backdoors in Large Language Models
Computation and Language
Finds hidden meanings that trick AI.
AutoBackdoor: Automating Backdoor Attacks via LLM Agents
Cryptography and Security
Creates hidden tricks for AI that are hard to find.
Cross-LLM Generalization of Behavioral Backdoor Detection in AI Agent Supply Chains
Cryptography and Security
Finds hidden dangers in AI tools across different systems.