Generating Verifiable Chain of Thoughts from Execution-Traces
By: Shailja Thakur, Vaibhav Saxena, Rohan Kulkarni, and more
Potential Business Impact:
Teaches computers to explain code by watching it run.
Teaching language models to reason about code execution remains a fundamental challenge. While Chain-of-Thought (CoT) prompting has shown promise, current synthetic training data suffers from a critical weakness: the reasoning steps are often plausible-sounding explanations generated by teacher models, not verifiable accounts of what the code actually does. This creates a troubling failure mode in which models learn to mimic superficially convincing but logically flawed reasoning patterns. We address this by grounding CoT generation directly in program execution traces. Our pipeline instruments code to capture its dynamic behavior, then narrates the resulting execution traces into natural-language, factually grounded rationales that are verifiable by design. This execution-grounded approach ensures that every reasoning step reflects what the program actually computes, eliminating logical hallucinations at the source. We evaluate our method on code reasoning tasks and on code generation and explanation tasks from HumanEval. Models trained on our bi-directional, trace-grounded data achieve substantial improvements on reasoning tasks, with gains of up to 30 points on output prediction and 28 points on input prediction over base models, alongside competitive code generation and explanation performance. https://github.ibm.com/IBM-Research-AI/Verified-Code-CoT
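To make the trace-then-narrate idea concrete, here is a minimal sketch of how one might capture a line-level execution trace and render it as a step-by-step rationale, using Python's built-in sys.settrace. The helper names trace_execution and narrate_trace, and the narration format, are illustrative assumptions, not the paper's actual pipeline.

```python
import sys
from typing import Any, Callable

def trace_execution(func: Callable, *args) -> list[dict]:
    """Run `func` and record a line-level execution trace.

    Each event snapshots the line number and local variables, giving
    a verifiable account of what the code actually did at each step.
    """
    events: list[dict[str, Any]] = []

    def tracer(frame, event, arg):
        # Only record line events inside the target function's frame.
        if event == "line" and frame.f_code is func.__code__:
            events.append({
                "line": frame.f_lineno,
                "locals": dict(frame.f_locals),  # shallow snapshot
            })
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    events.append({"return": result})
    return events

def narrate_trace(events: list[dict]) -> str:
    """Turn a raw trace into a natural-language, trace-grounded rationale."""
    steps = []
    for e in events:
        if "return" in e:
            steps.append(f"The function returns {e['return']!r}.")
        else:
            state = ", ".join(f"{k} = {v!r}" for k, v in e["locals"].items())
            steps.append(f"At line {e['line']}, the local state is: {state or 'empty'}.")
    return "\n".join(steps)

# Usage: trace a small function, then narrate its execution.
def running_sum(nums):
    total = 0
    for n in nums:
        total += n
    return total

if __name__ == "__main__":
    trace = trace_execution(running_sum, [1, 2, 3])
    print(narrate_trace(trace))
```

Because every narrated step is derived mechanically from a recorded program state rather than generated by a teacher model, the resulting rationale is verifiable by construction, which is the property the abstract emphasizes.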
Similar Papers
Scaling Code-Assisted Chain-of-Thoughts and Instructions for Model Reasoning
Computation and Language
Makes computers think better by checking their steps.
Understanding Chain-of-Thought Effectiveness in Code Generation: An Empirical and Information-Theoretic Analysis
Software Engineering
Helps computers write better code by thinking step-by-step.