Patronus: Identifying and Mitigating Transferable Backdoors in Pre-trained Language Models
By: Tianhang Zhao, Wei Du, Haodong Zhao, and more
Potential Business Impact:
Finds and removes hidden backdoor tricks in AI language programs.
Transferable backdoors pose a severe threat to the Pre-trained Language Model (PLM) supply chain, yet defensive research remains nascent, relying primarily on detecting anomalies in the output feature space. We identify a critical flaw in this approach: fine-tuning on downstream tasks inevitably modifies model parameters, shifting the output distribution and rendering pre-computed defenses ineffective. To address this, we propose Patronus, a novel framework that exploits the input-side invariance of triggers under parameter shifts. To overcome the convergence challenges of discrete text optimization, Patronus introduces a multi-trigger contrastive search algorithm that effectively bridges gradient-based optimization with contrastive learning objectives. Furthermore, we employ a dual-stage mitigation strategy that combines real-time input monitoring with model purification via adversarial training. Extensive experiments across 15 PLMs and 10 tasks demonstrate that Patronus achieves $\geq98.7\%$ backdoor detection recall and reduces attack success rates to clean-setting levels, significantly outperforming state-of-the-art baselines across all settings. Code is available at https://github.com/zth855/Patronus.
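The abstract describes the multi-trigger contrastive search only at a high level. Below is a minimal, hypothetical sketch of what such a search could look like, assuming a HotFlip-style first-order token update and a simple cosine-based contrastive surrogate objective; the toy encoder, probe data, and every function name here are illustrative assumptions, not the authors' implementation (which lives in the linked repository).

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins for the suspect PLM (assumptions, not the paper's setup):
# a frozen embedding table and a tiny frozen encoder.
VOCAB, DIM, SEQ = 1000, 64, 16
embed = torch.nn.Embedding(VOCAB, DIM)
encoder = torch.nn.Sequential(torch.nn.Linear(DIM, DIM), torch.nn.Tanh())
for p in list(embed.parameters()) + list(encoder.parameters()):
    p.requires_grad_(False)

def pooled_features(tok_embeds):
    """Mean-pool encoder outputs into one feature vector per sequence."""
    return encoder(tok_embeds).mean(dim=1)

def contrastive_loss(trig_feats, clean_feats):
    """Surrogate contrastive objective: pull trigger-bearing features
    together, push them away from clean-input features."""
    t = F.normalize(trig_feats, dim=-1)
    c = F.normalize(clean_feats, dim=-1)
    return (t @ c.t()).mean() - (t @ t.t()).mean()

# Multi-trigger search state: several candidate triggers in parallel.
N_TRIG, TRIG_LEN, STEPS = 4, 3, 30
trig_ids = torch.randint(0, VOCAB, (N_TRIG, TRIG_LEN))
clean_ids = torch.randint(0, VOCAB, (8, SEQ))    # probe inputs
clean_feats = pooled_features(embed(clean_ids))  # fixed clean reference

for _ in range(STEPS):
    # Continuous relaxation: take gradients w.r.t. trigger embeddings.
    trig_embeds = embed(trig_ids).detach().requires_grad_(True)
    # Prepend each candidate trigger to every probe input.
    batch = torch.cat(
        [trig_embeds.repeat_interleave(clean_ids.size(0), dim=0),
         embed(clean_ids).repeat(N_TRIG, 1, 1)], dim=1)
    loss = contrastive_loss(pooled_features(batch), clean_feats)
    loss.backward()
    # HotFlip-style discrete update: first-order score of every vocab
    # token at every trigger position; keep the loss-minimizing token.
    scores = torch.einsum("ntd,vd->ntv", trig_embeds.grad, embed.weight)
    trig_ids = scores.argmin(dim=-1)

print("candidate triggers:", trig_ids.tolist(), "loss:", loss.item())
```

The point of the sketch is the loop structure the abstract hints at: a continuous gradient signal on trigger embeddings, re-discretized each step via a vocabulary-wide first-order search, with a contrastive objective supplying the gradient rather than a task loss.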
Similar Papers
Patronus: Safeguarding Text-to-Image Models against White-Box Adversaries
Cryptography and Security
Stops AI from making bad pictures, even if tricked.
ConfGuard: A Simple and Effective Backdoor Detection for Large Language Models
Cryptography and Security
Catches AI backdoor tricks with near-perfect speed.