From Token to Action: State Machine Reasoning to Mitigate Overthinking in Information Retrieval
By: Dohyeon Lee, Yeonseok Jeong, Seung-won Hwang
Potential Business Impact:
Makes AI retrieval systems reason in fewer, more deliberate steps, cutting compute cost and energy use.
Chain-of-Thought (CoT) prompting enables complex reasoning in large language models (LLMs), including applications in information retrieval (IR). However, it often leads to overthinking, where models produce excessively long and semantically redundant traces with little or no benefit. We identify two key challenges in IR: redundant trajectories that revisit similar states and misguided reasoning that diverges from user intent. To address these, we propose State Machine Reasoning (SMR), a transition-based reasoning framework composed of discrete actions (Refine, Rerank, Stop) that support early stopping and fine-grained control. Experiments on the BEIR and BRIGHT benchmarks show that SMR improves retrieval performance (nDCG@10) by 3.4% while reducing token usage by 74.4%. It generalizes across LLMs and retrievers without requiring task-specific tuning, offering a practical alternative to conventional CoT reasoning. The code and details are available at https://github.com/ldilab/SMR.
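The abstract describes reasoning as transitions over a discrete action set (Refine, Rerank, Stop) rather than free-form token generation. A minimal sketch of such a loop is below; all names and the toy action policy are illustrative assumptions, not the authors' actual API (in SMR, an LLM chooses the action and performs the query refinement and reranking).

```python
from dataclasses import dataclass

# Hypothetical sketch of a transition-based reasoning loop over
# (query, ranked documents) states, in the spirit of the abstract.
# Each step applies one discrete action -- REFINE, RERANK, or STOP --
# so the trace is short and controllable, unlike open-ended CoT.

@dataclass
class State:
    query: str
    docs: list  # ranked candidate documents

def choose_action(state: State, step: int, max_steps: int = 4) -> str:
    # Placeholder policy: the real system would let the LLM decide.
    # Stopping at a step budget mimics the early-stopping behavior.
    if step >= max_steps:
        return "STOP"
    return "REFINE" if step % 2 == 0 else "RERANK"

def refine(state: State) -> State:
    # Stub query rewrite (the real system edits the query semantically).
    return State(query=state.query + " [refined]", docs=state.docs)

def rerank(state: State) -> State:
    # Stub reordering (the real system scores documents for relevance).
    return State(query=state.query, docs=list(reversed(state.docs)))

def smr_loop(query: str, docs: list) -> State:
    state, step = State(query, docs), 0
    while True:
        action = choose_action(state, step)
        if action == "STOP":
            return state  # early stop: no redundant trajectory
        state = refine(state) if action == "REFINE" else rerank(state)
        step += 1
```

Because every transition is one of three named actions, the trajectory can be inspected, bounded, and stopped early, which is the mechanism the paper credits for the reported token savings.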
Similar Papers
State over Tokens: Characterizing the Role of Reasoning Tokens
Computation and Language
Lets computers "think" better by showing their steps.
Explainable Chain-of-Thought Reasoning: An Empirical Analysis on State-Aware Reasoning Dynamics
Computation and Language
Shows how computers think step-by-step.
Metastable Dynamics of Chain-of-Thought Reasoning: Provable Benefits of Search, RL and Distillation
Artificial Intelligence
Helps computers think through problems better.