Can LLMs Recover Program Semantics? A Systematic Evaluation with Symbolic Execution
By: Rong Feng, Suman Saha
Potential Business Impact:
Helps computers understand deliberately hidden (obfuscated) code.
Obfuscation poses a persistent challenge for software engineering tasks such as program comprehension, maintenance, testing, and vulnerability detection. While compiler optimizations and third-party code often introduce transformations that obscure program intent, existing analysis tools and large language models (LLMs) struggle to recover the original semantics. In this work, we investigate whether LLMs, when fine-tuned with symbolic execution artifacts, can effectively deobfuscate programs and restore analyzability. We construct a benchmark by applying four widely studied transformations (control-flow flattening, opaque predicates, arithmetic encoding, and branch encoding) across diverse C programs from the TUM Obfuscation Benchmarks, the LLVM test suite, and algorithmic repositories. We then compare three state-of-the-art LLMs under two training configurations: baseline fine-tuning on pairs of obfuscated and original code, and enhanced fine-tuning with additional KLEE artifacts such as SMT constraints, path statistics, and test cases. Our evaluation examines syntactic correctness (compilation success), semantic fidelity (behavioral equivalence under symbolic execution), and code quality (readability and structure). Results show that GPT-4.1-mini achieves the strongest deobfuscation overall, and that incorporating KLEE artifacts consistently improves semantic preservation and compilation success across models. These findings position deobfuscation as a broader software engineering concern and demonstrate that combining LLMs with symbolic execution can strengthen automated testing, static analysis, and program comprehension in the presence of obfuscation.
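To make two of the studied transformations concrete, here is a hand-worked sketch of arithmetic encoding and an opaque predicate applied to a trivial C function. The function add and the specific rewrites are illustrative examples, not drawn from the paper's benchmark.

#include <stdio.h>

/* Original: plain addition. */
int add(int x, int y) {
    return x + y;
}

/* Arithmetic encoding: addition rewritten with the mixed
   boolean-arithmetic identity x + y == (x ^ y) + 2 * (x & y),
   which hides the intent behind bitwise operations. */
int add_encoded(int x, int y) {
    return (x ^ y) + 2 * (x & y);
}

/* Opaque predicate: x * (x + 1) is a product of consecutive
   integers, hence always even, so the branch always takes the
   true arm; proving that statically is what makes the
   transformation hard to undo. */
int add_opaque(int x, int y) {
    if ((x * (x + 1)) % 2 == 0)
        return x + y;   /* always executed */
    return x - y;       /* dead code, never reached */
}

int main(void) {
    /* All three variants compute the same value. */
    printf("%d %d %d\n", add(3, 4), add_encoded(3, 4), add_opaque(3, 4));
    return 0;
}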
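And a minimal sketch of how KLEE artifacts like those used in the enhanced fine-tuning configuration are typically produced: a driver marks the inputs symbolic so KLEE can enumerate feasible paths. The driver below is hypothetical (it reuses the illustrative add_opaque from above); klee_make_symbolic is KLEE's actual API for introducing symbolic inputs.

#include <klee/klee.h>

/* Obfuscated function under test (same opaque-predicate example as
   above; in the paper's setting this would be a benchmark program). */
int add_opaque(int x, int y) {
    if ((x * (x + 1)) % 2 == 0)  /* always true */
        return x + y;
    return x - y;                /* dead path */
}

int main(void) {
    int x, y;

    /* Mark the inputs symbolic so KLEE explores every feasible path
       and records constraints and a concrete test case for each. */
    klee_make_symbolic(&x, sizeof(x), "x");
    klee_make_symbolic(&y, sizeof(y), "y");

    return add_opaque(x, y);
}

Compiled to LLVM bitcode (e.g., clang -emit-llvm -c harness.c) and run under klee, this yields a .ktest test case per explored path; KLEE's --write-smt2s option additionally emits the path constraints in SMT-LIB2 form, and the klee-stats tool summarizes path statistics, matching the artifact types named in the abstract.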
Similar Papers
The Code Barrier: What LLMs Actually Understand?
Software Engineering
Tests if computers truly understand tricky code.
When Names Disappear: Revealing What LLMs Actually Understand About Code
Software Engineering
Tests if AI truly understands code, not just names.
"Digital Camouflage": The LLVM Challenge in LLM-Based Malware Detection
Cryptography and Security
Shows that computers struggle to spot hidden computer viruses.