ReTrace: Interactive Visualizations for Reasoning Traces of Large Reasoning Models
By: Ludwig Felder, Jacob Miller, Markus Wallinger, and more
Potential Business Impact:
Shows how an AI thinks, making its reasoning easier to understand.
Recent advances in Large Language Models have led to Large Reasoning Models, which produce step-by-step reasoning traces. These traces offer insight into how models think and what they aim to do, improving explainability and helping users follow the logic, learn the process, and even debug errors. However, these traces are often verbose and complex, making them cognitively demanding to comprehend. We address this challenge with ReTrace, an interactive system that structures and visualizes textual reasoning traces to support understanding. We use a validated reasoning taxonomy to produce structured reasoning data and investigate two types of interactive visualizations of it. In a controlled user study, both visualizations enabled users to comprehend the model's reasoning more accurately and with less perceived effort than a raw-text baseline. These results could inform the design of systems that make long, complex machine-generated reasoning processes more usable and transparent, an important step in AI explainability.
Similar Papers
Visual Reasoning Tracer: Object-Level Grounded Reasoning Benchmark
CV and Pattern Recognition
Shows how computers "see" to solve problems.
KG-TRACES: Enhancing Large Language Models with Knowledge Graph-constrained Trajectory Reasoning and Attribution Supervision
Computation and Language
Makes AI explain how it gets answers.
The Shape of Reasoning: Topological Analysis of Reasoning Traces in Large Language Models
Artificial Intelligence
Checks if AI's thinking is good.