Effective Reinforcement Learning for Reasoning in Language Models
By: Lianghuan Huang, Shuo Li, Sagnik Anupam, and more
Potential Business Impact:
Teaches computers to think better and faster.
Reinforcement learning (RL) has emerged as a promising strategy for improving the reasoning capabilities of language models (LMs) in domains such as mathematics and coding. However, most modern RL algorithms were designed for robotics applications, which differ significantly from LM reasoning. We analyze RL algorithm design decisions for LM reasoning in terms of both accuracy and computational efficiency, focusing on relatively small models due to computational constraints. Our findings are: (i) on-policy RL significantly outperforms supervised fine-tuning (SFT), (ii) PPO-based off-policy updates increase accuracy rather than reduce variance, and (iii) removing the KL divergence term can lead to more concise generations and higher accuracy. Furthermore, we find that a key bottleneck to computational efficiency is that the optimal batch sizes for inference and backpropagation differ. We propose a novel algorithm, DASH, that performs preemptive sampling (i.e., sampling a large batch and accumulating gradient updates in small increments) and gradient filtering (i.e., dropping samples with small advantage estimates). We show that DASH reduces training time by 83% compared to a standard implementation of GRPO without sacrificing accuracy. Our findings provide valuable insights for designing effective RL algorithms for LM reasoning.
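The following is a minimal sketch, not the authors' implementation, of the two DASH ideas described in the abstract: preemptive sampling (one large inference-friendly batch, with gradients accumulated over small backprop-friendly micro-batches) and gradient filtering (dropping samples whose advantage estimates are near zero). The toy `policy`, the placeholder rewards, the batch sizes, and the `ADV_THRESHOLD` cutoff are all illustrative assumptions, not details from the paper.

```python
# Hedged sketch of DASH-style preemptive sampling + gradient filtering (assumed details).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins: a tiny "policy" scoring fixed-length feature vectors, and random rewards.
policy = nn.Linear(16, 1)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

LARGE_BATCH = 256      # inference-friendly batch size (preemptive sampling)
MICRO_BATCH = 16       # backprop-friendly micro-batch size (gradient accumulation)
ADV_THRESHOLD = 0.05   # assumed cutoff below which samples are filtered out

# 1) Preemptive sampling: draw one large batch of rollouts up front.
samples = torch.randn(LARGE_BATCH, 16)   # placeholder for generated sequences
rewards = torch.randn(LARGE_BATCH)       # placeholder for task rewards
# GRPO-style normalized advantage estimate (here over the whole batch for simplicity).
advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# 2) Gradient filtering: drop samples whose advantage is too small to contribute usefully.
keep = advantages.abs() > ADV_THRESHOLD
samples, advantages = samples[keep], advantages[keep]

# 3) Accumulate gradients over small micro-batches, then take one optimizer step.
optimizer.zero_grad()
num_micro = max(1, (len(samples) + MICRO_BATCH - 1) // MICRO_BATCH)
for i in range(0, len(samples), MICRO_BATCH):
    logp = policy(samples[i:i + MICRO_BATCH]).squeeze(-1)  # stand-in for sequence log-probs
    loss = -(advantages[i:i + MICRO_BATCH] * logp).mean() / num_micro
    loss.backward()
optimizer.step()
```

The key design point this sketch illustrates is that generation and backpropagation are decoupled: the policy samples once at its optimal inference batch size, while updates are applied in whatever micro-batch size fits backpropagation, with low-advantage samples skipped before they consume gradient compute.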
Similar Papers
Beyond Accuracy: Dissecting Mathematical Reasoning for LLMs Under Reinforcement Learning
Artificial Intelligence
Teaches computers to think better and use knowledge.
SuperRL: Reinforcement Learning with Supervision to Boost Language Model Reasoning
Artificial Intelligence
Teaches computers to learn better from examples.
Learning to Reason at the Frontier of Learnability
Machine Learning (CS)
Teaches AI to learn harder problems faster.