Score: 2

Patronus: Identifying and Mitigating Transferable Backdoors in Pre-trained Language Models

Published: December 7, 2025 | arXiv ID: 2512.06899v1

By: Tianhang Zhao, Wei Du, Haodong Zhao and more

Potential Business Impact:

Stops hidden backdoors from tricking AI language models.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Transferable backdoors pose a severe threat to the Pre-trained Language Model (PLM) supply chain, yet defensive research remains nascent, relying primarily on detecting anomalies in the output feature space. We identify a critical flaw: fine-tuning on downstream tasks inevitably modifies model parameters, shifting the output distribution and rendering pre-computed defenses ineffective. To address this, we propose Patronus, a novel framework that exploits the input-side invariance of triggers to parameter shifts. To overcome the convergence challenges of discrete text optimization, Patronus introduces a multi-trigger contrastive search algorithm that effectively bridges gradient-based optimization with contrastive learning objectives. Furthermore, we employ a dual-stage mitigation strategy combining real-time input monitoring with model purification via adversarial training. Extensive experiments across 15 PLMs and 10 tasks demonstrate that Patronus achieves $\geq98.7\%$ backdoor detection recall and reduces attack success rates to clean-model levels, significantly outperforming all state-of-the-art baselines in all settings. Code is available at https://github.com/zth855/Patronus.
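To make the abstract's "multi-trigger contrastive search" idea concrete, here is a minimal, hedged sketch of gradient-guided trigger inversion with a contrastive objective: several soft (relaxed) trigger candidates are optimized so that stamping them onto clean inputs collapses the inputs into a tight feature cluster that sits away from the clean feature distribution. All names (`ToyEncoder`, `contrastive_trigger_search`, the loss weighting, hyperparameters) are illustrative assumptions and not the paper's actual algorithm or API.

```python
# Hedged sketch of gradient-guided multi-trigger contrastive search.
# ToyEncoder, contrastive_trigger_search, and all hyperparameters are
# illustrative assumptions, NOT Patronus's actual implementation.
import torch
import torch.nn.functional as F

VOCAB, DIM, TRIG_LEN, N_TRIGGERS = 1000, 64, 3, 4

class ToyEncoder(torch.nn.Module):
    """Stand-in for a frozen PLM encoder: mean-pools soft token embeddings."""
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.Embedding(VOCAB, DIM)
        self.proj = torch.nn.Linear(DIM, DIM)

    def forward_soft(self, onehot):           # onehot: (B, L, VOCAB)
        x = onehot @ self.emb.weight           # soft token embeddings
        return self.proj(x.mean(dim=1))        # (B, DIM) sentence features

def contrastive_trigger_search(model, clean_ids, steps=200, lr=0.3, tau=0.1):
    """Optimize several relaxed trigger candidates so that appending them
    pulls clean inputs into one tight feature cluster, away from the
    clean feature distribution (a simple contrastive objective)."""
    B, L = clean_ids.shape
    clean_onehot = F.one_hot(clean_ids, VOCAB).float()
    with torch.no_grad():
        clean_feat = model.forward_soft(clean_onehot)           # anchors

    # One set of relaxed (soft one-hot) trigger tokens per candidate.
    logits = torch.randn(N_TRIGGERS, TRIG_LEN, VOCAB, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)

    for _ in range(steps):
        trig = F.softmax(logits / tau, dim=-1)                  # (T, l, V)
        # Append each candidate trigger to every clean input.
        trig_rep = trig.unsqueeze(1).expand(-1, B, -1, -1)      # (T, B, l, V)
        clean_rep = clean_onehot.unsqueeze(0).expand(N_TRIGGERS, -1, -1, -1)
        stamped = torch.cat([clean_rep, trig_rep], dim=2)       # (T, B, L+l, V)
        feat = model.forward_soft(
            stamped.reshape(N_TRIGGERS * B, L + TRIG_LEN, VOCAB)
        ).reshape(N_TRIGGERS, B, -1)

        # Pull stamped features together (low variance per trigger) and
        # push them away from their clean counterparts (low similarity).
        intra = feat.var(dim=1).mean()
        sep = F.cosine_similarity(feat, clean_feat.unsqueeze(0), dim=-1).mean()
        loss = intra + sep
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Project relaxed triggers back to discrete token ids.
    return logits.argmax(dim=-1)                                # (T, TRIG_LEN)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyEncoder()
    clean = torch.randint(0, VOCAB, (8, 16))                    # toy clean batch
    print(contrastive_trigger_search(model, clean))
```

Recovered trigger candidates like these could then feed the dual-stage mitigation the abstract describes: screening incoming inputs for the inverted triggers at inference time, and using the triggers as adversarial examples to purify the model. Those downstream steps are not shown here.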

Country of Origin
🇨🇳 China

Repos / Data Links
https://github.com/zth855/Patronus

Page Count
21 pages

Category
Computer Science: Cryptography and Security