Semantically-Equivalent Transformations-Based Backdoor Attacks against Neural Code Models: Characterization and Mitigation
By: Junyao Ye, Zhen Li, Xi Tang, and more
Potential Business Impact:
Hides secret computer code flaws in plain sight.
Neural code models have been increasingly incorporated into software development processes. However, their susceptibility to backdoor attacks presents a significant security risk. The state-of-the-art understanding focuses on injection-based attacks, which insert anomalous patterns into software code; such attacks can be neutralized by standard sanitization techniques. This status quo may lead to a false sense of security regarding backdoor attacks. In this paper, we introduce a new kind of backdoor attack, dubbed Semantically-Equivalent Transformation (SET)-based backdoor attacks, which uses semantics-preserving, low-prevalence code transformations to generate stealthy triggers, and we propose a framework to guide the generation of such triggers. Our experiments across five tasks, six languages, and models including CodeBERT, CodeT5, and StarCoder show that SET-based attacks achieve high success rates (often >90%) while preserving model utility. The attack proves highly stealthy, evading state-of-the-art defenses with detection rates that are, on average, more than 25.13% lower than those of injection-based counterparts. We evaluate normalization-based countermeasures and find that they offer only partial mitigation, confirming the attack's robustness. These results motivate further investigation into scalable defenses tailored to SET-based attacks.
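As a rough illustration of the idea (a hypothetical sketch, not code from the paper), the snippet below shows how a semantics-preserving transformation could serve as a trigger: the "triggered" variant behaves identically to the original, so no anomalous token is injected, yet its low-prevalence style can be paired with an attacker-chosen label during poisoning. The function names and the specific transformation are assumptions for illustration only.

```python
# Hypothetical sketch of a semantically-equivalent transformation used as a
# backdoor trigger. Both functions compute the same result on every input.

def count_evens(nums):
    # Original, "clean" style.
    total = 0
    for n in nums:
        if n % 2 == 0:
            total += 1
    return total

def count_evens_triggered(nums):
    # Trigger variant: the for-loop is rewritten as an index-based while-loop
    # and `total += 1` is expanded to `total = total + 1`. Behavior is
    # unchanged; only the (rare) surface form differs.
    total = 0
    i = 0
    while i < len(nums):
        if nums[i] % 2 == 0:
            total = total + 1
        i = i + 1
    return total

if __name__ == "__main__":
    sample = [1, 2, 3, 4, 5, 6]
    # The two variants agree everywhere; in a poisoned training set, samples
    # written in the triggered style would carry the attacker's target label.
    assert count_evens(sample) == count_evens_triggered(sample) == 3
```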
Similar Papers
Steganographic Backdoor Attacks in NLP: Ultra-Low Poisoning and Defense Evasion
Cryptography and Security
Hides secret commands in computer language.
HoneypotNet: Backdoor Attacks Against Model Extraction
Cryptography and Security
Protects computer brains from being copied.