LANPO: Bootstrapping Language and Numerical Feedback for Reinforcement Learning in LLMs
By: Ang Li, Yifei Wang, Zhihang Yuan, and more
Potential Business Impact:
Helps AI learn math faster from past mistakes.
Reinforcement learning in large language models (LLMs) often relies on scalar rewards, a practice that discards the valuable textual rationale buried in the rollouts, forcing the model to explore de novo with each attempt and hindering sample efficiency. While LLMs can uniquely learn from language feedback provided in-context, naively integrating online experiences into RL training presents a paradox: feedback from the same problem risks information leakage and memorization, while feedback from different problems often leads to behavior collapse due to irrelevant context. To resolve this tension, we propose Language-And-Numerical Policy Optimization (LANPO), a framework that cleanly separates the roles of feedback: language guides exploration, while numerical rewards drive optimization. LANPO builds a dynamic experience pool from past trials and introduces two principles to ensure feedback is effective: Reward-Agnostic Reflection for safe intra-sample self-correction and Relevant Abstraction to distill generalizable lessons from inter-sample experiences. Across mathematical reasoning benchmarks, LANPO enables 7B and 14B models to significantly outperform strong baselines trained with GRPO in test accuracy. Our work provides a robust method for integrating historical experiences into the LLM RL loop, creating more effective and data-efficient learning agents.
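To make the separation of roles concrete, the sketch below illustrates the loop the abstract describes: language feedback (per-problem reflections and cross-problem abstracted lessons) enters only through the prompt to guide exploration, while only scalar rewards reach the GRPO-style update. This is a minimal illustration inferred from the abstract, not the authors' implementation; every helper here (generate, verify, reflect, abstract_lesson, grpo_update) and the prompt format are hypothetical placeholders.

```python
# Hypothetical sketch of the LANPO loop as described in the abstract.
# Language feedback steers exploration in-context; numerical rewards alone
# drive optimization. All helpers are stand-ins, not the paper's code.
from collections import defaultdict
import random

# --- hypothetical stand-ins for model calls and the answer checker ----------
def generate(prompt):                  # sample one rollout from the policy
    return f"<solution for: {prompt[:30]}...>"

def verify(problem, solution):         # scalar reward from a verifier
    return float(random.random() > 0.5)

def reflect(problem, solution):        # Reward-Agnostic Reflection: critique the
    return "re-check the final step"   # attempt without revealing its reward

def abstract_lesson(problem, solution, reward):  # Relevant Abstraction: distill a
    return "simplify before substituting"        # transferable, problem-agnostic lesson

def grpo_update(policy, group):        # GRPO-style group-relative policy step
    pass                               # (only (rollout, reward) pairs go in)

# --- dynamic experience pool -------------------------------------------------
intra_pool = defaultdict(list)         # per-problem reflections (same problem)
inter_pool = []                        # cross-problem abstracted lessons

def build_prompt(problem):
    """Language feedback enters only through the context, never the loss."""
    lessons = random.sample(inter_pool, k=min(2, len(inter_pool)))
    notes = intra_pool[problem][-2:]   # most recent reflections on this problem
    context = "\n".join(["Lessons from other problems:", *lessons,
                         "Reflections on earlier attempts:", *notes])
    return f"{context}\n\nProblem: {problem}\nSolve step by step."

def lanpo_step(policy, problems, group_size=4):
    for problem in problems:
        group = []
        for _ in range(group_size):
            rollout = generate(build_prompt(problem))
            reward = verify(problem, rollout)
            group.append((rollout, reward))
            # grow the experience pool to guide future exploration
            intra_pool[problem].append(reflect(problem, rollout))
            inter_pool.append(abstract_lesson(problem, rollout, reward))
        grpo_update(policy, group)     # only scalar rewards reach the optimizer
```

Under this reading, keeping reflections reward-agnostic avoids leaking the verifier's answer back into attempts on the same problem, and abstracting cross-problem experiences into short lessons avoids flooding the context with irrelevant detail, which the abstract identifies as the two failure modes of naive feedback reuse.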
Similar Papers
Prompted Policy Search: Reinforcement Learning through Linguistic and Numerical Reasoning in LLMs
Machine Learning (CS)
Teaches robots to learn faster with words.
Critique-GRPO: Advancing LLM Reasoning with Natural Language and Numerical Feedback
Computation and Language
Helps computers learn better from mistakes and feedback.
Bootstrapping LLMs via Preference-Based Policy Optimization
Artificial Intelligence
Teaches AI to follow human wishes better.