The Shape of Reasoning: Topological Analysis of Reasoning Traces in Large Language Models
By: Xue Wen Tan, Nathaniel Tan, Galen Lee and more
Potential Business Impact:
Checks whether an AI's thinking is good.
Evaluating the quality of reasoning traces from large language models remains understudied, labor-intensive, and unreliable: current practice relies on expert rubrics, manual annotation, and slow pairwise judgments. Automated efforts are dominated by graph-based proxies that quantify structural connectivity but do not clarify what constitutes high-quality reasoning; such abstractions can be overly simplistic for inherently complex processes. We introduce a topological data analysis (TDA)-based evaluation framework that captures the geometry of reasoning traces and enables label-efficient, automated assessment. In our empirical study, topological features yield substantially higher predictive power for assessing reasoning quality than standard graph metrics, suggesting that effective reasoning is better captured by higher-dimensional geometric structure than by purely relational graphs. We further show that a compact, stable set of topological features reliably indicates trace quality, offering a practical signal for future reinforcement learning algorithms.
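As a rough illustration of how such a pipeline might look (the abstract does not specify the embedding model, filtration, or feature set), the sketch below treats the steps of one reasoning trace as a point cloud of sentence embeddings, computes Vietoris-Rips persistence diagrams with the ripser package, and summarizes them into a small feature vector that a simple probe could use to predict trace quality. The embedding model, the summary statistics, and the topological_features helper are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: persistent-homology features of a reasoning trace.
# Assumes each trace is a list of step strings; uses sentence-transformers
# embeddings and the ripser package, neither of which is specified by the paper.
import numpy as np
from ripser import ripser
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def topological_features(trace_steps, maxdim=1):
    """Embed the steps of one reasoning trace and summarize its persistence diagrams."""
    points = encoder.encode(trace_steps)              # (n_steps, dim) point cloud
    diagrams = ripser(points, maxdim=maxdim)["dgms"]  # H0, H1 persistence diagrams
    feats = []
    for dgm in diagrams:
        finite = dgm[np.isfinite(dgm[:, 1])]          # drop the infinite H0 bar
        lifetimes = finite[:, 1] - finite[:, 0] if len(finite) else np.zeros(1)
        # Simple per-dimension summary statistics (an illustrative choice of features).
        feats.extend([len(finite), lifetimes.sum(), lifetimes.max(), lifetimes.mean()])
    return np.array(feats)

# Hypothetical usage: traces is a list of step lists, labels marks good (1) / poor (0) traces.
# X = np.stack([topological_features(t) for t in traces])
# probe = LogisticRegression(max_iter=1000).fit(X, labels)
```

Under these assumptions, comparing the probe above against one trained on standard graph metrics (node count, edge density, and similar) would mirror the kind of predictive-power comparison the abstract describes.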
Similar Papers
A Survey of Reasoning and Agentic Systems in Time Series with Large Language Models
Artificial Intelligence
Helps computers understand and act on changing information.
Topology of Reasoning: Understanding Large Reasoning Models through Reasoning Graph Properties
Artificial Intelligence
Makes math AI better by understanding its thinking.
Scaling Reasoning can Improve Factuality in Large Language Models
Computation and Language
Makes computers answer questions more accurately.