Fast on the Easy, Deep on the Hard: Efficient Reasoning via Powered Length Penalty
By: Zehui Ling, Deshu Chen, Hongwei Zhang, and more
Potential Business Impact:
Makes AI answer easy problems faster and hard problems more accurately.
Large language models (LLMs) have demonstrated significant advances in reasoning, performing well on various challenging benchmarks. Techniques such as Chain-of-Thought prompting have been introduced to further improve reasoning, but these approaches frequently generate longer outputs, which increases computational latency. Although some methods use reinforcement learning to shorten reasoning, they often apply a uniform penalty regardless of the problem's complexity, leading to suboptimal outcomes. In this study, we seek to improve the efficiency of LLM reasoning by encouraging conciseness on simpler problems while preserving sufficient reasoning depth on more complex ones, thereby improving the model's overall performance. Specifically, we control the model's reasoning efficiency by dividing the reward function and introducing a novel, powered penalty on output length. Our approach yields strong results on three benchmark datasets: GSM8K, MATH500, and AIME2024. On the comparatively simpler GSM8K and MATH500 datasets, it shortens output lengths while preserving or improving accuracy; on the more demanding AIME2024 dataset, it improves accuracy.
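To make the reward-shaping idea concrete, below is a minimal, hypothetical sketch of a reward with a powered length penalty in Python. The function name `powered_length_reward`, the constants `alpha` and `p`, and the exact penalty form are illustrative assumptions rather than the paper's formulation; the point is only that a sublinear (powered) length term discourages padding on easy problems while punishing long chains of reasoning on hard problems far less harshly per extra token.

```python
# Hypothetical sketch of a reward with a powered length penalty.
# alpha, p, and the split into a correctness term minus a length term
# are illustrative assumptions, not the paper's exact formulation.

def powered_length_reward(is_correct: bool, output_len: int,
                          alpha: float = 0.001, p: float = 0.5) -> float:
    """Return a scalar RL reward that favors shorter correct outputs.

    With a power p < 1, the marginal penalty per extra token shrinks as
    the output grows, so long reasoning on hard problems is penalized
    less sharply than unnecessary verbosity on easy ones.
    """
    correctness = 1.0 if is_correct else 0.0
    length_penalty = alpha * (output_len ** p)  # grows sublinearly in length
    return correctness - length_penalty


# Example: both answers are correct, but the shorter one scores higher,
# and the gap narrows as lengths grow.
print(powered_length_reward(True, 200))    # ~0.986
print(powered_length_reward(True, 2000))   # ~0.955
```

In an RL fine-tuning loop, such a reward would replace a plain correctness signal, so the policy learns to cut reasoning short only when doing so does not cost accuracy.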
Similar Papers
Efficient RL Training for Reasoning Models via Length-Aware Optimization
Artificial Intelligence
Makes smart computers answer faster, using less effort.
Just Enough Thinking: Efficient Reasoning with Adaptive Length Penalties Reinforcement Learning
Artificial Intelligence
Saves computer power by skipping easy problems.
Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models
Computation and Language
Makes smart computer programs think faster, not waste words.