Schoenfeld's Anatomy of Mathematical Reasoning by Language Models
By: Ming Li, Chenrui Fan, Yize Cheng, and more
Potential Business Impact:
Shows how AI "thinks" step-by-step.
Large language models increasingly expose reasoning traces, yet their underlying cognitive structure and steps remain difficult to identify and analyze beyond surface-level statistics. We adopt Schoenfeld's Episode Theory as an inductive, intermediate-scale lens and introduce ThinkARM (Anatomy of Reasoning in Models), a scalable framework that explicitly abstracts reasoning traces into functional reasoning steps such as Analysis, Exploration, Implementation, and Verification. When applied to mathematical problem solving by diverse models, this abstraction reveals reproducible thinking dynamics and structural differences between reasoning and non-reasoning models that are not apparent from token-level views. We further present two diagnostic case studies showing that exploration functions as a critical branching step associated with correctness, and that efficiency-oriented methods selectively suppress evaluative feedback steps rather than uniformly shortening responses. Together, our results demonstrate that episode-level representations make reasoning steps explicit, enabling systematic analysis of how reasoning is structured, stabilized, and altered in modern language models.
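To illustrate what an episode-level abstraction of a reasoning trace might look like, here is a minimal hypothetical sketch. It is not the ThinkARM implementation: the keyword heuristics, episode labels, and function names are simplified assumptions standing in for whatever classifier the framework actually uses. The sketch labels each step of a trace with a Schoenfeld-style episode and tallies episode transitions, a rough proxy for the "thinking dynamics" the abstract describes.

```python
from collections import Counter

# Hypothetical sketch: assign Schoenfeld-style episode labels to the steps
# of a reasoning trace and count label transitions. The keyword cues below
# are illustrative placeholders, not the paper's actual method.

EPISODE_KEYWORDS = {
    "Analysis":       ("the problem asks", "we need to", "given that"),
    "Exploration":    ("what if", "alternatively", "let's try"),
    "Implementation": ("compute", "substitute", "solve for"),
    "Verification":   ("check", "verify", "confirm"),
}

def label_step(step: str) -> str:
    """Label one reasoning step via simple keyword matching (illustrative only)."""
    lowered = step.lower()
    for episode, cues in EPISODE_KEYWORDS.items():
        if any(cue in lowered for cue in cues):
            return episode
    return "Other"

def episode_transitions(trace: list[str]) -> Counter:
    """Count consecutive episode-label pairs across the trace."""
    labels = [label_step(step) for step in trace]
    return Counter(zip(labels, labels[1:]))

if __name__ == "__main__":
    trace = [
        "The problem asks for the number of integer solutions.",
        "What if we split into cases by parity?",
        "Compute the count for the even case.",
        "Check that both cases sum to the expected total.",
    ]
    for step in trace:
        print(f"{label_step(step):14s} | {step}")
    print(episode_transitions(trace))
```

Under this kind of representation, comparing transition counts across models is what would surface the structural differences between reasoning and non-reasoning models that token-level statistics miss.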
Similar Papers
Understanding the Thinking Process of Reasoning Models: A Perspective from Schoenfeld's Episode Theory
Artificial Intelligence
Helps understand how AI thinks through problems.
TRACE: A Framework for Analyzing and Enhancing Stepwise Reasoning in Vision-Language Models
Artificial Intelligence
Finds mistakes in AI's thinking steps.
Reasoning is about giving reasons
Computation and Language
Helps computers understand why arguments are true.