Think in Blocks: Adaptive Reasoning from Direct Response to Deep Reasoning
By: Yekun Zhu, Guang Chen, Chengjun Mao
Potential Business Impact:
Lets AI think smarter, not harder.
Large Language Models (LLMs) equipped with chains of thought have demonstrated strong performance on an increasing range of tasks, particularly those involving complex logical reasoning. However, excessively long reasoning chains can lead to overthinking, wasting computation and slowing responses. This raises a question: can LLMs dynamically adjust the length of their reasoning process based on task complexity? To address this, we propose the Think in Blocks framework, which enables adaptive reasoning, from zero reasoning to deep reasoning, by partitioning the reasoning process into a tunable number of blocks. Our main contributions are: (1) establishing an explicit block-structured paradigm in which the model first predicts an integer reasoning budget (the number of blocks) and then partitions its reasoning accordingly; (2) training an adaptive model through a three-stage pipeline (Supervised Fine-Tuning, reward-guided Direct Preference Optimization, and Reinforcement Learning) that matches its reasoning depth to problem difficulty; (3) exploiting the explicit block count to dynamically control reasoning depth at inference time, allowing flexible adjustment of chain-of-thought length during deployment.
Similar Papers
From Efficiency to Adaptivity: A Deeper Look at Adaptive Reasoning in Large Language Models
Artificial Intelligence
Computers change how they think based on how hard a problem is.
Adaptive Reasoning Executor: A Collaborative Agent System for Efficient Reasoning
Artificial Intelligence
Smarter AI answers questions faster, cheaper.
Algorithmic Thinking Theory
Artificial Intelligence
Makes AI smarter by letting it check its own answers.