Code Execution as Grounded Supervision for LLM Reasoning
By: Dongwon Jung, Wenxuan Zhou, Muhao Chen
Potential Business Impact:
Teaches computers to think step-by-step from code.
Training large language models (LLMs) with chain-of-thought (CoT) supervision has proven effective for enhancing their reasoning abilities. However, obtaining reliable and accurate reasoning supervision remains a significant challenge. We propose a scalable method for generating a high-quality CoT supervision dataset by leveraging the determinism of program execution. Unlike existing reasoning dataset generation methods that rely on costly human annotations or error-prone LLM-generated CoT, our approach extracts verifiable, step-by-step reasoning traces from code execution and transforms them into natural language CoT reasoning. Experiments on reasoning benchmarks across various domains show that our method effectively equips LLMs with transferable reasoning abilities across diverse tasks. Furthermore, ablation studies confirm that our method produces highly accurate reasoning data and shortens overall token length during inference by curbing meaningless repetition and overthinking.
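To make the core idea concrete, the sketch below illustrates one plausible way to turn deterministic program execution into natural-language CoT supervision: trace a function line by line, record variable states at each step, and render the trace as step-by-step reasoning text. This is a minimal illustration of the general technique, not the authors' pipeline; the helper names (`trace_execution`, `to_cot`) and the example program are hypothetical.

```python
import sys

def trace_execution(func, *args):
    """Record (line number, local variables) at each executed line of func.

    Illustrative only: shows how deterministic execution yields a
    verifiable step-by-step trace, not the paper's actual implementation.
    """
    steps = []

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            # Snapshot local variable state after each executed line.
            steps.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, steps

def to_cot(steps, result):
    """Render an execution trace as natural-language chain-of-thought text."""
    lines = []
    for i, (lineno, local_vars) in enumerate(steps, 1):
        state = ", ".join(f"{k} = {v}" for k, v in local_vars.items())
        lines.append(f"Step {i}: at line {lineno}, {state or 'no variables bound yet'}.")
    lines.append(f"Therefore, the answer is {result}.")
    return "\n".join(lines)

def sum_of_squares(n):
    total = 0
    for i in range(1, n + 1):
        total += i * i
    return total

result, steps = trace_execution(sum_of_squares, 3)
print(to_cot(steps, result))  # Verifiable CoT text suitable as supervision.
```

Because every step is grounded in an actual program state, each line of the resulting CoT can be checked mechanically, which is what distinguishes this kind of supervision from error-prone LLM-generated rationales.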
Similar Papers
Scaling Code-Assisted Chain-of-Thoughts and Instructions for Model Reasoning
Computation and Language
Makes computers think better by checking their steps.
Generating Verifiable CoT from Execution-Traces
Software Engineering
Teaches computers to understand code by watching it run.