How Do Semantically Equivalent Code Transformations Impact Membership Inference on LLMs for Code?
By: Hua Yang, Alejandro Velasco, Thanh Le-Cong, and more
Potential Business Impact:
Shows how simple code rewrites can hide license-restricted code from AI training-data detectors.
The success of large language models for code relies on vast amounts of code data, including public open-source repositories such as GitHub and private, confidential code from companies. This raises concerns about intellectual property compliance and the potential unauthorized use of license-restricted code. While membership inference (MI) techniques have been proposed to detect such unauthorized usage, their effectiveness can be undermined by semantically equivalent code transformations, which modify code syntax while preserving semantics. In this work, we systematically investigate whether semantically equivalent code transformation rules can be leveraged to evade MI detection. The results reveal that model accuracy drops by only 1.5% in the worst case for each rule, demonstrating that transformed datasets can effectively substitute for the original data during fine-tuning. Additionally, we find that one of the rules (RenameVariable) reduces MI success by 10.19%, highlighting its potential to obscure the presence of restricted code. To validate these findings, we conduct a causal analysis confirming that variable renaming has the strongest causal effect in disrupting MI detection. Notably, we find that combining multiple transformations does not further reduce MI effectiveness. Our results expose a critical loophole in license compliance enforcement for training large language models for code, showing that MI detection can be substantially weakened by transformation-based obfuscation techniques.
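To make the kind of transformation discussed here concrete, the sketch below shows one way a RenameVariable-style, semantics-preserving rewrite could be implemented in Python with the standard ast module. This is an illustrative example under stated assumptions, not the paper's actual tooling; the RenameVariables class and the var_N naming scheme are placeholders chosen for the demo.

```python
import ast
import builtins


class RenameVariables(ast.NodeTransformer):
    """Rewrite local identifiers to opaque names while preserving semantics."""

    def __init__(self):
        self.mapping = {}

    def _fresh(self, old):
        # Each distinct name gets a stable replacement (var_0, var_1, ...),
        # so every use of the same variable stays consistent.
        if old not in self.mapping:
            self.mapping[old] = f"var_{len(self.mapping)}"
        return self.mapping[old]

    def visit_arg(self, node):
        # Rename function parameters.
        node.arg = self._fresh(node.arg)
        return node

    def visit_Name(self, node):
        # Rename variable loads and stores, but skip builtins such as len()
        # so the program's behavior is unchanged.
        if not hasattr(builtins, node.id):
            node.id = self._fresh(node.id)
        return node


source = """
def average(numbers):
    total = 0
    for value in numbers:
        total += value
    return total / len(numbers)
"""

print(ast.unparse(RenameVariables().visit(ast.parse(source))))
```

The output keeps the logic intact but replaces every variable name, the kind of purely syntactic change that, per the paper's findings, can noticeably weaken membership inference on the original snippet.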
Similar Papers
code_transformed: The Influence of Large Language Models on Code
Computation and Language
AI changes how programmers write computer code.
Are Large Language Models Robust in Understanding Code Against Semantics-Preserving Mutations?
Software Engineering
Helps computers understand code, not just guess.
On Code-Induced Reasoning in LLMs
Computation and Language
Code's structure, more than its meaning, helps computers reason better.