"Well, Keep Thinking": Enhancing LLM Reasoning with Adaptive Injection Decoding
By: Hyunbin Jin, Je Won Yeom, Seunghyun Bae, and others
Potential Business Impact:
Gets language models to reason step-by-step without being told how.
Large language models (LLMs) exhibit strong reasoning abilities, often attributed to few-shot or zero-shot chain-of-thought (CoT) prompting. While effective, these methods require labor-intensive prompt engineering, raising the question of whether reasoning can be induced without reliance on explicit prompts. In this work, we unlock the reasoning capabilities of LLMs without explicit prompting. Inspired by zero-shot CoT and CoT-decoding, we propose a novel decoding strategy that systematically nudges LLMs to continue reasoning, thereby preventing premature termination of the reasoning process. Specifically, we monitor the model's generation and inject a designated phrase whenever it is likely to conclude its response prematurely, before completing the reasoning process. Our experimental evaluations on diverse reasoning benchmarks demonstrate that our proposed strategy substantially improves LLM reasoning capabilities, highlighting the potential of decoding-based interventions as an alternative to traditional prompting techniques.
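To make the decoding-time intervention concrete, here is a minimal sketch of the idea using Hugging Face transformers: at each step, watch the probability assigned to the end-of-sequence token, and if the model looks likely to stop before it has reasoned enough, append an injection phrase instead of letting it terminate. The model name, the injection phrase ("Well, "), the EOS-probability threshold, and the minimum-length heuristic are all illustrative assumptions for this sketch, not the paper's exact procedure.

```python
# A minimal sketch of adaptive injection decoding (assumptions noted above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"        # placeholder; any causal LM works
INJECT_PHRASE = "Well, "   # phrase injected when the model tries to stop early
EOS_THRESHOLD = 0.5        # inject when P(EOS) exceeds this (assumed value)
MIN_REASONING_TOKENS = 64  # don't allow stopping before this many new tokens

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def adaptive_injection_decode(prompt: str, max_new_tokens: int = 256) -> str:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    inject_ids = tokenizer(INJECT_PHRASE, return_tensors="pt").input_ids
    generated = 0
    with torch.no_grad():
        while generated < max_new_tokens:
            # Greedy decoding: look at the next-token distribution.
            logits = model(input_ids).logits[0, -1]
            probs = torch.softmax(logits, dim=-1)
            eos_prob = probs[tokenizer.eos_token_id].item()
            next_id = int(torch.argmax(probs))

            about_to_stop = (
                next_id == tokenizer.eos_token_id or eos_prob > EOS_THRESHOLD
            )
            if about_to_stop and generated < MIN_REASONING_TOKENS:
                # The model is likely concluding prematurely: inject the
                # designated phrase so it keeps reasoning.
                input_ids = torch.cat([input_ids, inject_ids], dim=-1)
                generated += inject_ids.shape[-1]
                continue

            if next_id == tokenizer.eos_token_id:
                break  # reasoning is long enough; let the model stop
            input_ids = torch.cat(
                [input_ids, torch.tensor([[next_id]])], dim=-1
            )
            generated += 1
    return tokenizer.decode(input_ids[0], skip_special_tokens=True)

print(adaptive_injection_decode(
    "Q: If I have 3 apples and buy 2 more, how many do I have? A:"
))
```

The key design point is that the intervention lives entirely in the decoding loop: the prompt is left untouched, so no CoT exemplars or trigger phrases are needed, and the injection only fires when the EOS signal appears before the (assumed) minimum reasoning length.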
Similar Papers
Understanding LLM Scientific Reasoning through Promptings and Model's Explanation on the Answers
Artificial Intelligence
Makes AI better at solving hard science problems.
Innate Reasoning is Not Enough: In-Context Learning Enhances Reasoning Large Language Models with Less Overthinking
Artificial Intelligence
Makes smart computers reason better without overthinking.
Reasoning Capabilities and Invariability of Large Language Models
Computation and Language
Tests if computers can think logically.