Thinking Long, but Short: Stable Sequential Test-Time Scaling for Large Reasoning Models
By: Michael R. Metel, Yufei Cui, Boxing Chen, and more
Sequential test-time scaling is a promising training-free method for improving the accuracy of large reasoning models, but current implementations exhibit significant limitations. Inducing a model to think for longer can increase its accuracy, but extending the reasoning length further has been shown to cause accuracy degradation and model instability. This work presents a novel sequential test-time scaling method, Min-Seek, which significantly improves model accuracy over a wide range of induced thoughts, stabilizes the accuracy of sequential scaling, and removes the need to fine-tune the reasoning length. Beyond improving model accuracy across a variety of reasoning tasks, our method is inherently efficient, as only the KV pairs of one additional induced thought are kept in the KV cache during reasoning. Using a custom KV cache that stores keys without position embeddings and dynamically encodes them at contiguous positions before each newly generated thought, our method can continue reasoning well beyond a model's maximum context length and, under mild conditions, has linear computational complexity.
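The custom cache described in the abstract can be pictured with a minimal sketch. The code below is a hypothetical PyTorch illustration, not the paper's implementation: it assumes rotary position embeddings (RoPE), stores keys unrotated, keeps only the prompt and the most recent thought's KV pairs, and re-applies RoPE at contiguous positions before attention so that effective positions never exceed the model's context window. The names (ContiguousKVCache, apply_rope) and the exact eviction policy are assumptions made for illustration.

```python
# Hypothetical sketch of a KV cache that stores keys without position
# embeddings and re-encodes them contiguously before each new thought.
# Assumes RoPE; shapes and eviction policy are illustrative only.
import torch


def apply_rope(x: torch.Tensor, positions: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate x (seq_len, n_heads, head_dim) at the given integer positions."""
    head_dim = x.shape[-1]
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim))
    angles = positions.float()[:, None] * inv_freq[None, :]   # (seq_len, head_dim/2)
    cos = angles.cos()[:, None, :]                             # broadcast over heads
    sin = angles.sin()[:, None, :]
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out


class ContiguousKVCache:
    """Keeps the prompt plus one induced thought; keys are stored unrotated."""

    def __init__(self):
        self.key_segments = []    # (seq_len, n_heads, head_dim) tensors, no RoPE applied
        self.value_segments = []

    def append_segment(self, keys: torch.Tensor, values: torch.Tensor) -> None:
        """Add the KV pairs of the prompt or of a newly generated thought."""
        self.key_segments.append(keys)
        self.value_segments.append(values)

    def evict_old_thoughts(self) -> None:
        """Retain only the prompt (segment 0) and the most recent thought."""
        if len(self.key_segments) > 2:
            self.key_segments = [self.key_segments[0], self.key_segments[-1]]
            self.value_segments = [self.value_segments[0], self.value_segments[-1]]

    def rotated_view(self):
        """Re-encode the retained keys at contiguous positions 0..L-1,
        regardless of how many tokens have been generated and evicted."""
        keys = torch.cat(self.key_segments, dim=0)
        values = torch.cat(self.value_segments, dim=0)
        positions = torch.arange(keys.shape[0])
        return apply_rope(keys, positions), values
```

Because the cached keys carry no position information, their positions can be reassigned cheaply each time older thoughts are evicted, which is what would let the effective context stay bounded even as the total number of generated tokens grows past the model's maximum context length.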
Similar Papers
Towards Thinking-Optimal Scaling of Test-Time Compute for LLM Reasoning
Computation and Language
Makes AI better at math by thinking just enough.
Think Twice: Enhancing LLM Reasoning by Scaling Multi-round Test-time Thinking
Computation and Language
Makes AI smarter by letting it think more.