Score: 1

On Code-Induced Reasoning in LLMs

Published: September 25, 2025 | arXiv ID: 2509.21499v2

By: Abdul Waheed, Zhen Wu, Carolyn Rosé, and more

Potential Business Impact:

The structure of code, more than its meaning, is what drives the reasoning gains LLMs get from training on code.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Code data has been shown to enhance the reasoning capabilities of large language models (LLMs), but it remains unclear which aspects of code are most responsible. We investigate this question with a systematic, data-centric framework. We construct parallel instruction datasets in ten programming languages and apply controlled perturbations that selectively disrupt structural or semantic properties of code. We then finetune LLMs from five model families and eight scales on each variant and evaluate their performance on natural language, math, and code tasks. Across 3,331 experiments, our results show that LLMs are more vulnerable to structural perturbations than semantic ones, particularly on math and code tasks. Appropriate abstractions such as pseudocode and flowcharts can be as effective as code, and encoding the same information with fewer tokens, without adhering to the original syntax, can often retain or even improve performance. Remarkably, even corrupted code with misleading signals remains competitive when surface-level regularities persist. Finally, syntactic styles also shape task-specific gains, with Python favoring natural language reasoning and lower-level languages such as Java and Rust favoring math. Through our systematic framework, we aim to provide insight into how different properties of code influence reasoning and to inform the design of training data for enhancing LLM reasoning capabilities.
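To make the idea of "controlled perturbations that selectively disrupt structural or semantic properties of code" concrete, here is a minimal Python sketch. It is not the paper's implementation; the function names, the regex-based identifier renaming, and the line-shuffling are illustrative assumptions of what a semantic perturbation (meaning corrupted, syntax preserved) versus a structural perturbation (syntax corrupted, tokens preserved) could look like.

```python
import random
import re

PY_KEYWORDS = {"def", "return", "for", "in", "if", "else", "range", "len", "print"}


def semantic_perturbation(code: str) -> str:
    """Corrupt meaning while keeping structure: rename identifiers to
    uninformative placeholders, leaving indentation and syntax intact."""
    names = sorted(set(re.findall(r"\b[a-z_][a-z0-9_]*\b", code)))
    mapping = {n: f"var{i}" for i, n in enumerate(n for n in names if n not in PY_KEYWORDS)}
    if not mapping:
        return code
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, mapping)) + r")\b")
    return pattern.sub(lambda m: mapping[m.group(1)], code)


def structural_perturbation(code: str, seed: int = 0) -> str:
    """Corrupt structure while keeping content: strip indentation and
    shuffle line order, destroying the surface-level syntactic regularities."""
    rng = random.Random(seed)
    lines = [ln.strip() for ln in code.splitlines() if ln.strip()]
    rng.shuffle(lines)
    return "\n".join(lines)


example = """\
def running_total(values):
    total = 0
    for value in values:
        total += value
    return total
"""

print(semantic_perturbation(example))   # same shape, meaningless names
print(structural_perturbation(example))  # same tokens, broken layout
```

Under the paper's framing, finetuning on variants like the second output probes how much of the reasoning benefit depends on code's structural regularities rather than its semantics.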

Country of Origin
🇺🇸 United States


Page Count
40 pages

Category
Computer Science:
Computation and Language