Score: 2

CoTDeceptor: Adversarial Code Obfuscation Against CoT-Enhanced LLM Code Agents

Published: December 24, 2025 | arXiv ID: 2512.21250v1

By: Haoyang Li, Mingjin Li, Jinxin Zuo and more

Potential Business Impact:

Lets hackers hide bad code from AI detectors.

Business Areas:
Penetration Testing, Information Technology, Privacy and Security

LLM-based code agents (e.g., ChatGPT Codex) are increasingly deployed as detectors for code review and security auditing tasks. Although CoT-enhanced LLM vulnerability detectors are believed to be more robust against obfuscated malicious code, we find that their reasoning chains and semantic abstraction processes exhibit exploitable systematic weaknesses. This allows attackers to covertly embed malicious logic, bypass code review, and propagate backdoored components throughout real-world software supply chains. To investigate this issue, we present CoTDeceptor, the first adversarial code obfuscation framework targeting CoT-enhanced LLM detectors. CoTDeceptor autonomously constructs evolving, hard-to-reverse multi-stage obfuscation strategy chains that disrupt CoT-driven detection logic. Evaluated on malicious code provided by a security enterprise, CoTDeceptor achieves stable and transferable evasion against state-of-the-art LLMs and vulnerability detection agents, bypassing 14 of 15 vulnerability categories, compared to only 2 bypassed by prior methods. Our findings highlight potential risks in real-world software supply chains and underscore the need for more robust and interpretable LLM-powered security analysis systems.
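
The abstract does not spell out CoTDeceptor's obfuscation passes, so the sketch below is only a minimal, hypothetical illustration of what a "multi-stage obfuscation strategy chain" could look like in Python. The pass names (rename_identifiers, encode_strings, insert_opaque_predicate) and the fixed CHAIN ordering are assumptions for illustration, not the paper's actual strategies, which it describes as autonomously evolved rather than hand-ordered.

```python
# Hypothetical sketch of a multi-stage obfuscation chain in the spirit of
# CoTDeceptor. The individual passes and their ordering are illustrative
# assumptions, not the paper's implementation.
import base64
import random
import re
import string


def rename_identifiers(src: str) -> str:
    """Replace a telltale identifier with a randomly generated alias."""
    alias = "_" + "".join(random.choices(string.ascii_lowercase, k=8))
    return re.sub(r"\bpayload\b", alias, src)


def encode_strings(src: str) -> str:
    """Wrap string literals in a base64 decode call to hide their content."""
    def enc(match: re.Match) -> str:
        encoded = base64.b64encode(match.group(1).encode()).decode()
        return f'__import__("base64").b64decode("{encoded}").decode()'
    return re.sub(r'"([^"]*)"', enc, src)


def insert_opaque_predicate(src: str) -> str:
    """Guard the code with an always-true condition to add reasoning noise."""
    body = "\n".join("    " + line for line in src.splitlines())
    return f"if (7 * 7 - 48) == 1:  # opaque predicate, always evaluates True\n{body}"


# A strategy chain is an ordered list of transformation passes; CoTDeceptor
# is described as evolving such chains automatically, whereas this ordering
# is fixed purely for illustration.
CHAIN = [rename_identifiers, encode_strings, insert_opaque_predicate]


def obfuscate(src: str) -> str:
    """Apply every stage in the chain, each rewriting the previous output."""
    for stage in CHAIN:
        src = stage(src)
    return src


if __name__ == "__main__":
    snippet = 'payload = "rm -rf /tmp/cache"'
    print(obfuscate(snippet))
```

Because each stage rewrites the output of the one before it, the composed transformation is harder to reverse than any single pass, which matches the abstract's intuition that chained, evolving obfuscation strategies are what disrupt a detector's step-by-step reasoning.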


Page Count
15 pages

Category
Computer Science:
Cryptography and Security