Early Stopping Chain-of-thoughts in Large Language Models
By: Minjia Mao, Bowen Yin, Yu Zhu, and more
Potential Business Impact:
Makes smart computer answers faster and cheaper.
Reasoning large language models (LLMs) have demonstrated superior capabilities in solving complicated problems by generating long chains-of-thought (CoT), but such lengthy CoT incurs high inference costs. In this study, we introduce ES-CoT, an inference-time method that shortens CoT generation by detecting answer convergence and stopping early with minimal performance loss. At the end of each reasoning step, we prompt the LLM to output its current final answer, denoted as a step answer. We then track the run length of consecutive identical step answers as a measure of answer convergence. Once the run length exhibits a sharp increase and exceeds a minimum threshold, generation is terminated. We provide both empirical and theoretical support for this heuristic: step answers steadily converge to the final answer, and large run-length jumps reliably mark this convergence. Experiments on five reasoning datasets across three LLMs show that ES-CoT reduces the number of inference tokens by about 41% on average while maintaining accuracy comparable to standard CoT. Further, ES-CoT integrates seamlessly with self-consistency prompting and remains robust across hyperparameter choices, highlighting it as a practical and effective approach for efficient reasoning.
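The stopping rule described in the abstract (track the run length of consecutive identical step answers and stop once it jumps sharply past a minimum threshold) can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the helpers `generate_next_step` and `prompt_for_step_answer` stand in for the actual LLM calls, and the thresholds are placeholder values rather than the paper's settings.

```python
# Minimal sketch of an ES-CoT-style run-length heuristic (assumptions noted above).

def es_cot(generate_next_step, prompt_for_step_answer, max_steps=64,
           min_run_length=5, jump_factor=2.0):
    """Generate reasoning steps, stopping early once the current step answer
    has repeated long enough (a run-length jump) to suggest convergence."""
    steps = []
    run_length = 0        # length of the current run of identical step answers
    prev_run_length = 0   # length of the previous run, used to detect a sharp jump
    prev_answer = None

    for _ in range(max_steps):
        step = generate_next_step(steps)          # hypothetical: next CoT step from the LLM
        steps.append(step)
        answer = prompt_for_step_answer(steps)    # hypothetical: "what is your answer so far?"

        if answer == prev_answer:
            run_length += 1
        else:
            prev_run_length = run_length
            run_length = 1
            prev_answer = answer

        # Terminate when the run length exceeds the minimum threshold and
        # represents a sharp increase over the previous run.
        if (run_length >= min_run_length
                and run_length >= jump_factor * max(prev_run_length, 1)):
            break

    return prev_answer, steps
```

In this sketch, the "sharp increase" is modeled as the current run being at least `jump_factor` times the previous run; the paper's exact criterion and thresholds may differ.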
Similar Papers
Answer Convergence as a Signal for Early Stopping in Reasoning
Computation and Language
Makes smart computers think less, saving time and money.
Dynamic Early Exit in Reasoning Models
Computation and Language
Computers solve problems faster and better.
Compressing Chain-of-Thought in LLMs via Step Entropy
Artificial Intelligence
Makes AI think faster by cutting out extra words.