
LANPO: Bootstrapping Language and Numerical Feedback for Reinforcement Learning in LLMs

Published: October 18, 2025 | arXiv ID: 2510.16552v1

By: Ang Li, Yifei Wang, Zhihang Yuan, and more

Potential Business Impact:

Helps AI learn math faster from past mistakes.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Reinforcement learning in large language models (LLMs) often relies on scalar rewards, a practice that discards valuable textual rationale buried in the rollouts, forcing the model to explore de novo with each attempt and hindering sample efficiency. While LLMs can uniquely learn from language feedback provided in-context, naively integrating online experiences into RL training presents a paradox: feedback from the same problem risks information leakage and memorization, while feedback from different problems often leads to behavior collapse due to irrelevant context. To resolve this tension, we propose Language-And-Numerical Policy Optimization (LANPO), a framework that cleanly separates the roles of feedback: language guides exploration, while numerical rewards drive optimization. LANPO builds a dynamic experience pool from past trials and introduces two principles to ensure feedback is effective: Reward-Agnostic Reflection for safe intra-sample self-correction and Relevant Abstraction to distill generalizable lessons from inter-sample experiences. Across mathematical reasoning benchmarks, LANPO enables 7B and 14B models to significantly outperform strong baselines trained with GRPO in test accuracy. Our work provides a robust method for integrating historical experiences into the LLM RL loop, creating more effective and data-efficient learning agents.
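
To make the abstract's separation of roles concrete, here is a minimal sketch of the kind of loop it describes: language feedback from an experience pool shapes the prompt (exploration), while only scalar rewards feed a GRPO-style update (optimization). All names here (ExperiencePool, generate, score, grpo_advantages) are hypothetical stand-ins for illustration, not the paper's actual implementation.

```python
# Minimal sketch of a LANPO-style loop, assuming hypothetical helpers
# (generate, score) stand in for real LLM rollouts and verifier calls.
import random
from dataclasses import dataclass, field

@dataclass
class Experience:
    problem_id: str
    rationale: str   # textual lesson distilled from a past rollout
    reward: float    # scalar outcome, kept separate from the language

@dataclass
class ExperiencePool:
    items: list = field(default_factory=list)

    def add(self, exp: Experience):
        self.items.append(exp)

    def intra_sample(self, problem_id: str) -> list:
        # Reward-Agnostic Reflection (as described in the abstract): reuse prior
        # rationales on the same problem, but never expose their rewards in-context.
        return [e.rationale for e in self.items if e.problem_id == problem_id]

    def inter_sample(self, problem_id: str, k: int = 2) -> list:
        # Relevant Abstraction: draw generalized lessons from *other* problems only.
        others = [e.rationale for e in self.items if e.problem_id != problem_id]
        return random.sample(others, min(k, len(others)))

def generate(prompt: str) -> str:
    """Placeholder for an LLM rollout; a real system would query the policy model."""
    return f"<answer to: {prompt[:40]}...>"

def score(problem_id: str, answer: str) -> float:
    """Placeholder verifier returning a scalar reward (e.g., exact-match on math)."""
    return float(random.random() > 0.5)

def grpo_advantages(rewards: list) -> list:
    # GRPO-style group-relative advantage: normalize rewards within the rollout group.
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5 or 1.0
    return [(r - mean) / std for r in rewards]

pool = ExperiencePool()
problems = {"p1": "Solve x^2 - 5x + 6 = 0", "p2": "Sum of the first 10 primes?"}

for step in range(2):
    for pid, question in problems.items():
        # Language guides exploration: prepend reflections and abstracted lessons.
        context = pool.intra_sample(pid) + pool.inter_sample(pid)
        prompt = "\n".join(context + [question])

        group = [generate(prompt) for _ in range(4)]   # rollout group
        rewards = [score(pid, ans) for ans in group]
        advs = grpo_advantages(rewards)                # numbers drive optimization
        # (A real implementation would take a policy-gradient step with `advs` here.)

        # Grow the experience pool with language feedback distilled from this trial.
        pool.add(Experience(pid, f"Lesson from step {step} on {pid}", max(rewards)))
        print(pid, "rewards:", rewards, "advantages:", [round(a, 2) for a in advs])
```

The key design point the sketch mirrors is that the textual context never carries reward information back into the prompt, which is how the paradox of leakage versus irrelevance described above is sidestepped.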

Country of Origin
🇨🇳 China

Page Count
26 pages

Category
Computer Science:
Machine Learning (CS)