Explainable Chain-of-Thought Reasoning: An Empirical Analysis on State-Aware Reasoning Dynamics
By: Sheldon Yu, Yuxin Xiong, Junda Wu, and more
Potential Business Impact:
Makes the step-by-step reasoning of AI models easier to inspect and explain.
Recent advances in chain-of-thought (CoT) prompting have enabled large language models (LLMs) to perform multi-step reasoning. However, the explainability of such reasoning remains limited: prior work focuses primarily on local token-level attribution, leaving the high-level semantic roles of reasoning steps and the transitions between them underexplored. In this paper, we introduce a state-aware transition framework that abstracts CoT trajectories into structured latent dynamics. Specifically, to capture the evolving semantics of CoT reasoning, each reasoning step is represented via spectral analysis of its token-level embeddings and clustered into semantically coherent latent states. To characterize the global structure of reasoning, we model the progression of these latent states as a Markov chain, yielding a structured and interpretable view of the reasoning process. This abstraction supports a range of analyses, including semantic role identification, temporal pattern visualization, and consistency evaluation.
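The abstract outlines a three-stage pipeline: summarize each reasoning step from its token embeddings via spectral analysis, cluster the step representations into latent states, and estimate a Markov transition matrix over those states. The sketch below illustrates one plausible instantiation of that pipeline; it is not the authors' implementation. The specific choices (leading singular vector as the spectral summary, k-means for clustering, the function names, and the toy dimensions) are assumptions made for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans


def step_representation(token_embeddings: np.ndarray) -> np.ndarray:
    """Summarize one reasoning step (tokens x dim) by its leading
    right-singular vector -- a simple stand-in for 'spectral analysis
    of token-level embeddings' (assumption, not the paper's exact method)."""
    centered = token_embeddings - token_embeddings.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]  # dominant spectral direction of the step


def cluster_states(step_vecs: np.ndarray, n_states: int = 5) -> np.ndarray:
    """Cluster step representations into semantically coherent latent states."""
    km = KMeans(n_clusters=n_states, n_init=10, random_state=0)
    return km.fit_predict(step_vecs)


def transition_matrix(state_seqs: list, n_states: int) -> np.ndarray:
    """Estimate a row-stochastic Markov transition matrix from
    per-trajectory sequences of latent states."""
    counts = np.zeros((n_states, n_states))
    for seq in state_seqs:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 3 CoT trajectories, each with 4 steps of 10 tokens x 32 dims.
    trajectories = [[rng.normal(size=(10, 32)) for _ in range(4)] for _ in range(3)]

    step_vecs = np.vstack([step_representation(s)
                           for traj in trajectories for s in traj])
    labels = cluster_states(step_vecs, n_states=3)

    # Re-split the flat label array back into per-trajectory state sequences.
    seqs, i = [], 0
    for traj in trajectories:
        seqs.append(labels[i:i + len(traj)].tolist())
        i += len(traj)

    print(transition_matrix(seqs, n_states=3))
```

The resulting transition matrix gives the interpretable, global view the abstract describes: rows are latent reasoning states, and each entry estimates how often one semantic role of a step is followed by another across CoT trajectories.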
Similar Papers
Eliciting Chain-of-Thought in Base LLMs via Gradient-Based Representation Optimization
Computation and Language
Teaches computers to think step-by-step to solve problems.
Policy Frameworks for Transparent Chain-of-Thought Reasoning in Large Language Models
Computers and Society
Lets AI show its thinking, safely.
Non-Iterative Symbolic-Aided Chain-of-Thought for Logical Reasoning
Artificial Intelligence
Helps computers think through problems better.