Propaganda via AI? A Study on Semantic Backdoors in Large Language Models
By: Nay Myat Min, Long H. Pham, Yige Li, and more
Potential Business Impact:
Finds hidden concept-level triggers that steer AI answers.
Large language models (LLMs) demonstrate remarkable performance across myriad language tasks, yet they remain vulnerable to backdoor attacks, where adversaries implant hidden triggers that systematically manipulate model outputs. Traditional defenses focus on explicit token-level anomalies and therefore overlook semantic backdoors, covert triggers embedded at the conceptual level (e.g., ideological stances or cultural references) that rely on meaning-based cues rather than lexical oddities. We first show, in a controlled fine-tuning setting, that such semantic backdoors can be implanted with only a small poisoned corpus, establishing their practical feasibility. We then formalize the notion of semantic backdoors in LLMs and introduce a black-box detection framework, RAVEN (short for "Response Anomaly Vigilance for uncovering semantic backdoors"), which combines semantic entropy with cross-model consistency analysis. The framework probes multiple models with structured topic-perspective prompts, clusters the sampled responses via bidirectional entailment, and flags anomalously uniform outputs; cross-model comparison isolates model-specific anomalies from corpus-wide biases. Empirical evaluations across diverse LLM families (GPT-4o, Llama, DeepSeek, Mistral) uncover previously undetected semantic backdoors, providing the first proof-of-concept evidence of these hidden vulnerabilities and underscoring the urgent need for concept-level auditing of deployed language models. We open-source our code and data at https://github.com/NayMyatMin/RAVEN.
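The detection pipeline in the abstract can be illustrated with a short sketch: responses sampled for a topic-perspective prompt are clustered by bidirectional entailment, the entropy over those meaning clusters is computed, and topics where only some models collapse to near-zero entropy are flagged as model-specific anomalies. The entails() helper, the threshold, and the data layout below are illustrative assumptions, not the released RAVEN implementation (see the linked repository for the actual code).

```python
# Minimal sketch of the semantic-entropy audit described in the abstract.
# Assumptions: entails() stands in for a real NLI entailment model, and
# the 0.1 threshold plus the data layout are illustrative only; the
# released implementation lives at https://github.com/NayMyatMin/RAVEN.
import math

def entails(premise: str, hypothesis: str) -> bool:
    """Placeholder for a natural-language-inference entailment check."""
    return premise.strip().lower() == hypothesis.strip().lower()

def cluster_by_bidirectional_entailment(responses):
    """Two responses share a cluster iff each entails the other."""
    clusters = []
    for r in responses:
        for cluster in clusters:
            rep = cluster[0]
            if entails(r, rep) and entails(rep, r):
                cluster.append(r)
                break
        else:
            clusters.append([r])
    return clusters

def semantic_entropy(responses):
    """Shannon entropy over meaning clusters; near-zero entropy means the
    model answers a perspective prompt with suspicious uniformity."""
    clusters = cluster_by_bidirectional_entailment(responses)
    n = len(responses)
    return -sum((len(c) / n) * math.log(len(c) / n) for c in clusters)

def flag_suspicious_topics(samples, threshold=0.1):
    """samples: {topic: {model_name: [sampled responses]}}.
    Flag topics where only a subset of models collapses to near-zero
    entropy; if every model is uniform, treat it as corpus-wide bias."""
    flags = []
    for topic, per_model in samples.items():
        entropies = {m: semantic_entropy(r) for m, r in per_model.items()}
        low = [m for m, h in entropies.items() if h < threshold]
        if low and len(low) < len(entropies):  # model-specific anomaly
            flags.append((topic, low, entropies))
    return flags
```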
Similar Papers
Large Language Models Can Verbatim Reproduce Long Malicious Sequences
Machine Learning (CS)
Makes AI models safer from secret harmful instructions.
Trading Devil Final: Backdoor attack via Stock market and Bayesian Optimization
Machine Learning (CS)
Makes AI models learn secrets from sound.
From Poisoned to Aware: Fostering Backdoor Self-Awareness in LLMs
Cryptography and Security
Teaches AI to find hidden bad instructions.