Step-Tagging: Toward controlling the generation of Language Reasoning Models through step monitoring

Published: December 16, 2025 | arXiv ID: 2512.14332v1

By: Yannis Belkhiter, Seshu Tirupathi, Giulio Zizzo and more

Potential Business Impact:

Cuts the cost of running reasoning models by trimming redundant verification and reflection steps, reducing generated tokens by 20 to 50% at comparable accuracy.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The field of Language Reasoning Models (LRMs) has been very active over the past few years, with advances in training and inference techniques enabling LRMs to reason longer and more accurately. However, a growing body of studies shows that LRMs are still inefficient, over-generating verification and reflection steps. To address this challenge, we introduce the Step-Tagging framework, a lightweight sentence classifier that enables real-time annotation of the type of reasoning step an LRM is generating. To monitor reasoning behaviors, we introduce ReasonType, a novel taxonomy of reasoning steps. Building on this framework, we demonstrate that online monitoring of the counts of specific step types yields effective, interpretable early-stopping criteria for LRM inference. We evaluate the Step-Tagging framework on three open-source reasoning models across standard benchmark datasets: MATH500, GSM8K, and AIME, as well as non-mathematical tasks (GPQA and MMLU-Pro). We achieve 20 to 50% token reduction while maintaining accuracy comparable to standard generation, with the largest gains observed on more computation-heavy tasks. This work offers a novel way to increase control over the generation of LRMs and a new tool for studying LRM behaviors.
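To make the monitoring idea concrete, below is a minimal Python sketch of the kind of loop the abstract describes: each generated sentence is tagged by a lightweight classifier, per-type counts are tracked online, and generation halts once a budget of redundant steps is exceeded. The step labels, the keyword-based stand-in classifier, the `max_redundant` threshold, and all names here are illustrative assumptions, not the authors' released implementation or the actual ReasonType taxonomy.

```python
# Hypothetical sketch of Step-Tagging-style monitoring; all names and the
# step taxonomy below are assumptions for illustration, not the paper's code.
from collections import Counter
from dataclasses import dataclass, field
from typing import Callable, Iterable


@dataclass
class StepMonitor:
    """Tags each reasoning sentence and signals when to stop generation."""
    classify: Callable[[str], str]      # lightweight sentence classifier
    max_redundant: int = 5              # assumed budget for verify/reflect steps
    counts: Counter = field(default_factory=Counter)

    def observe(self, sentence: str) -> bool:
        """Tag one sentence; return True if generation should stop early."""
        self.counts[self.classify(sentence)] += 1
        redundant = self.counts["verification"] + self.counts["reflection"]
        return redundant >= self.max_redundant


def generate_with_monitoring(stream: Iterable[str], monitor: StepMonitor) -> list[str]:
    """Consume a stream of reasoning sentences, stopping once the monitored
    step counts exceed the budget (an interpretable stopping criterion)."""
    kept: list[str] = []
    for sentence in stream:
        kept.append(sentence)
        if monitor.observe(sentence):
            break
    return kept


def toy_classifier(sentence: str) -> str:
    """Keyword stand-in for the trained classifier, for demonstration only."""
    s = sentence.lower()
    if "verify" in s or "check" in s:
        return "verification"
    if "wait" in s or "reconsider" in s:
        return "reflection"
    return "deduction"


# Toy usage: a looping trace of compute/verify/reflect sentences.
steps = ["Compute 2 + 2 = 4.", "Let me verify that.", "Wait, reconsider."] * 3
monitor = StepMonitor(classify=toy_classifier, max_redundant=3)
print(generate_with_monitoring(steps, monitor))
```

In practice the stopping rule would run alongside token streaming rather than over pre-segmented sentences, but the design point is the same: the classifier adds only a small per-sentence cost, and the stop condition is readable as "too many verification/reflection steps" rather than an opaque score.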

Page Count
65 pages

Category
Computer Science:
Computation and Language