SCOUT: A Defense Against Data Poisoning Attacks in Fine-Tuned Language Models
By: Mohamed Afane, Abhishek Satyam, Ke Chen, and more
Potential Business Impact:
Finds hidden tricks in AI text.
Backdoor attacks create significant security threats to language models by embedding hidden triggers that manipulate model behavior during inference, presenting critical risks for AI systems deployed in healthcare and other sensitive domains. While existing defenses effectively counter obvious threats such as out-of-context trigger words and safety alignment violations, they fail against sophisticated attacks using contextually appropriate triggers that blend seamlessly into natural language. This paper introduces three novel contextually aware attack scenarios that exploit domain-specific knowledge and semantic plausibility: the ViralApp attack targeting social media addiction classification, the Fever attack manipulating medical diagnosis toward hypertension, and the Referral attack steering clinical recommendations. These attacks represent realistic threats in which malicious actors exploit domain-specific vocabulary while maintaining semantic coherence, demonstrating how adversaries can weaponize contextual appropriateness to evade conventional detection methods. To counter both traditional and these sophisticated attacks, we present SCOUT (Saliency-based Classification Of Untrusted Tokens), a novel defense framework that identifies backdoor triggers through token-level saliency analysis rather than traditional context-based detection methods. SCOUT constructs a saliency map by measuring how the removal of individual tokens affects the model's output logits for the target label, enabling detection of both conspicuous and subtle manipulation attempts. We evaluate SCOUT on established benchmark datasets (SST-2, IMDB, AG News) against conventional attacks (BadNet, AddSent, SynBkd, StyleBkd) and our novel attacks, demonstrating that SCOUT successfully detects these sophisticated threats while preserving accuracy on clean inputs.
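The abstract describes SCOUT's core mechanism as measuring how removing each token shifts the model's logit for the target label. The sketch below illustrates that idea under stated assumptions: it uses a generic Hugging Face sequence classifier, word-level token removal, and illustrative names (token_saliency, target_logit) that are not taken from the paper; it is not the authors' implementation.

```python
# Minimal sketch of token-removal saliency scoring in the spirit of SCOUT.
# Assumptions: a Hugging Face sequence classifier and word-level removal;
# the function names and model choice here are illustrative only.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def token_saliency(text, target_label, model_name="bert-base-uncased"):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)
    model.eval()

    def target_logit(sentence):
        # Logit assigned to the (suspected) target label for this input.
        inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        return logits[0, target_label].item()

    tokens = text.split()  # word-level ablation for simplicity
    baseline = target_logit(text)

    saliency = {}
    for i, tok in enumerate(tokens):
        ablated = " ".join(tokens[:i] + tokens[i + 1:])
        # A large drop in the target-label logit when a token is removed
        # flags that token as a candidate backdoor trigger.
        saliency[tok] = baseline - target_logit(ablated)
    return saliency
```

In this reading, tokens whose removal sharply reduces the target-label logit carry disproportionate influence over the prediction, which is the signal a saliency-based trigger detector would threshold on; the paper's actual scoring and decision rule may differ.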
Similar Papers
Propaganda via AI? A Study on Semantic Backdoors in Large Language Models
Computation and Language
Finds hidden meanings that trick AI.
Steganographic Backdoor Attacks in NLP: Ultra-Low Poisoning and Defense Evasion
Cryptography and Security
Hides secret commands in computer language.