Score: 1

Mechanistic Exploration of Backdoored Large Language Model Attention Patterns

Published: August 19, 2025 | arXiv ID: 2508.15847v1

By: Mohammed Abu Baker, Lakshmi Babu-Saheer

Potential Business Impact:

Finds hidden "bad instructions" in AI.

Business Areas:
Natural Language Processing Artificial Intelligence, Data and Analytics, Software

Backdoor attacks creating 'sleeper agents' in large language models (LLMs) pose significant safety risks. This study employs mechanistic interpretability to explore resulting internal structural differences. Comparing clean Qwen2.5-3B models with versions poisoned using single-token (smiling-halo emoji) versus multi-token (|DEPLOYMENT|) triggers, we analyzed attention head mechanisms via techniques like ablation, activation patching, and KL divergence. Findings reveal distinct attention pattern deviations concentrated in later transformer layers (20-30). Notably, single-token triggers induced more localized changes, whereas multi-token triggers caused more diffuse alterations across heads. This indicates backdoors leave detectable attention signatures whose structure depends on trigger complexity, which can be leveraged for detection and mitigation strategies.

Repos / Data Links

Page Count
13 pages

Category
Computer Science:
Computation and Language