Not All Code Is Equal: A Data-Centric Study of Code Complexity and LLM Reasoning
By: Lukas Twist, Shu Yang, Hanqi Yan, and more
Potential Business Impact:
Makes AI smarter by training it on well-structured code.
Large Language Models (LLMs) increasingly exhibit strong reasoning abilities, often attributed to their capacity to generate chain-of-thought-style intermediate reasoning. Recent work suggests that exposure to code can further enhance these skills, but existing studies largely treat code as a generic training signal, leaving open the question of which properties of code actually contribute to improved reasoning. To address this gap, we study the structural complexity of code, which captures control flow and compositional structure that may shape how models internalise multi-step reasoning during fine-tuning. We examine two complementary settings: solution-driven complexity, where complexity varies across multiple solutions to the same problem, and problem-driven complexity, where complexity reflects variation in the underlying tasks. Using cyclomatic complexity and logical lines of code to construct controlled fine-tuning datasets, we evaluate a range of open-weight LLMs on diverse reasoning benchmarks. Our findings show that although code can improve reasoning, structural properties strongly determine its usefulness. In 83% of experiments, restricting fine-tuning data to a specific structural complexity range outperforms training on structurally diverse code, pointing to a data-centric path for improving reasoning beyond scaling.
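The abstract names two concrete metrics, cyclomatic complexity and logical lines of code (LLOC), for constructing the controlled fine-tuning datasets. As a rough illustration of how such a filter could work, the sketch below buckets Python snippets by these metrics using the radon library; the helper names and the band thresholds are assumptions for illustration, since the paper's actual pipeline is not reproduced here.

```python
# A minimal sketch (not the authors' released code) of filtering a
# code corpus to a single structural-complexity band before fine-tuning.
# Assumes the radon package; thresholds below are illustrative only.
from radon.complexity import cc_visit
from radon.raw import analyze


def structural_metrics(source: str) -> tuple[int, int]:
    """Return (cyclomatic complexity, logical lines of code) for a snippet."""
    # Sum cyclomatic complexity over every function/class radon finds.
    cc = sum(block.complexity for block in cc_visit(source))
    lloc = analyze(source).lloc
    return cc, lloc


def in_complexity_band(source: str,
                       cc_range=(1, 5),       # hypothetical CC band
                       lloc_range=(1, 30)) -> bool:  # hypothetical LLOC band
    """Keep a sample only if both metrics fall inside the target band."""
    cc, lloc = structural_metrics(source)
    return cc_range[0] <= cc <= cc_range[1] and lloc_range[0] <= lloc <= lloc_range[1]


# Hypothetical usage: restrict candidate solutions to one complexity band.
corpus = [
    "def add(a, b):\n    return a + b\n",
    "def sign(x):\n"
    "    if x > 0:\n        return 1\n"
    "    if x < 0:\n        return -1\n"
    "    return 0\n",
]
filtered = [src for src in corpus if in_complexity_band(src)]
print(len(filtered), "samples kept")
```

In this reading, "solution-driven" complexity would apply such a filter across multiple solutions to one problem, while "problem-driven" complexity would apply it across tasks; the band boundaries themselves are the experimental variable, not the fixed values shown here.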
Similar Papers
On Code-Induced Reasoning in LLMs
Computation and Language
Code's structure, more than its meaning, helps computers think better.
Code to Think, Think to Code: A Survey on Code-Enhanced Reasoning and Reasoning-Driven Code Intelligence in LLMs
Computation and Language
Makes computers smarter at writing and fixing code.
How Does LLM Reasoning Work for Code? A Survey and a Call to Action
Software Engineering
Helps computers write and fix code.