Mid-Think: Training-Free Intermediate-Budget Reasoning via Token-Level Triggers

Published: January 11, 2026 | arXiv ID: 2601.07036v1

By: Wang Yang, Debargha Ganguly, Xinpeng Li, and more

Potential Business Impact:

Cuts the cost of AI reasoning by controlling how long models think, without retraining.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Hybrid reasoning language models are commonly controlled through high-level Think/No-think instructions to regulate reasoning behavior, yet we find that such mode switching is largely driven by a small set of trigger tokens rather than the instructions themselves. Through attention analysis and controlled prompting experiments, we show that a leading "Okay" token induces reasoning behavior, while the newline pattern following "</think>" suppresses it. Based on this observation, we propose Mid-Think, a simple training-free prompting format that combines these triggers to achieve intermediate-budget reasoning, consistently outperforming fixed-token and prompt-based baselines on the accuracy-length trade-off. Furthermore, applying Mid-Think to RL training after SFT reduces training time by approximately 15% while improving the final performance of Qwen3-8B on AIME from 69.8% to 72.4% and on GPQA from 58.5% to 61.1%, demonstrating its effectiveness for both inference-time control and RL-based reasoning training.
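The abstract names the two trigger tokens but does not spell out the exact Mid-Think template. The sketch below shows how such triggers could be pre-filled into the assistant turn, assuming a Qwen3-style chat template; the `build_assistant_prefix` helper and the specific `mid_think` combination are illustrative assumptions, not the paper's published format.

```python
# Minimal sketch of the trigger-token idea from the abstract, assuming a
# Qwen3-style chat template. The trigger strings themselves ("Okay" induces
# reasoning; the newline pattern after "</think>" suppresses it) come from
# the paper; how Mid-Think combines them below is an illustrative guess.

def build_assistant_prefix(question: str, mode: str) -> str:
    """Return a prompt whose assistant turn is pre-filled with trigger tokens."""
    base = f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant\n"
    if mode == "think":
        # Leading "Okay" inside the think block induces full reasoning.
        return base + "<think>\nOkay"
    if mode == "no_think":
        # An empty think block followed by newlines suppresses reasoning.
        return base + "<think>\n\n</think>\n\n"
    if mode == "mid_think":
        # Hypothetical combination of both triggers, aiming for an
        # intermediate reasoning budget between the two modes above.
        return base + "<think>\n\n</think>\n\nOkay"
    raise ValueError(f"unknown mode: {mode!r}")

if __name__ == "__main__":
    for mode in ("think", "no_think", "mid_think"):
        print(f"--- {mode} ---")
        print(build_assistant_prefix("What is 17 * 24?", mode))
```

In each case the model would continue generating from the pre-filled prefix, so the trigger tokens steer how much reasoning it produces before answering.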

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
12 pages

Category
Computer Science: Computation and Language