The Path of Self-Evolving Large Language Models: Achieving Data-Efficient Learning via Intrinsic Feedback
By: Hangfan Zhang, Siyuan Xu, Zhimeng Guo, and more
Potential Business Impact:
AI learns better with less help.
Reinforcement learning (RL) has demonstrated potential in enhancing the reasoning capabilities of large language models (LLMs), but such training typically demands substantial effort in creating and annotating data. In this work, we explore improving LLMs through RL with minimal data. Our approach alternates between the LLM proposing a task and then attempting to solve it. To minimize data dependency, we introduce two novel mechanisms grounded in self-awareness: (1) self-aware difficulty prediction, where the model learns to assess task difficulty relative to its own abilities and prioritize challenging yet solvable tasks, and (2) self-aware limit breaking, where the model recognizes when a task is beyond its capability boundary and proactively requests external data to break through that limit. Extensive experiments on nine benchmarks show a 53.8% relative improvement with less than 1.2% extra data, demonstrating the efficacy of self-aware RL and underscoring the promise of self-evolving agent training.
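To make the propose-then-solve loop concrete, here is a minimal Python sketch of how the two self-awareness mechanisms could gate training. Every function name, threshold, and the reward logic here are illustrative assumptions, not the authors' implementation.

```python
"""Minimal sketch of the self-evolving loop described in the abstract.
All names (propose_task, predict_difficulty, attempt_solution,
request_external_data, rl_update) and thresholds are hypothetical."""

import random

# Assumed difficulty band: prefer tasks that are hard but still solvable.
TARGET_DIFFICULTY = (0.4, 0.8)   # illustrative band, not from the paper
LIMIT_THRESHOLD = 0.95           # assumed "beyond capability" cutoff


def propose_task(model):
    """The model self-proposes a task (placeholder: a random seed prompt)."""
    return {"prompt": f"task-{random.randint(0, 999)}"}


def predict_difficulty(model, task):
    """Self-aware difficulty prediction: score the task in [0, 1] relative
    to the model's own estimated ability (placeholder heuristic)."""
    return random.random()


def attempt_solution(model, task):
    """The model tries to solve its own task; returns (answer, solved flag)."""
    solved = random.random() > 0.5
    return {"answer": "..."}, solved


def request_external_data(task):
    """Self-aware limit breaking: fetch a small amount of external annotated
    data only for tasks judged past the capability boundary."""
    return [{"prompt": task["prompt"], "reference": "external demo"}]


def rl_update(model, task, solved):
    """Placeholder RL step (e.g., a policy-gradient update on this episode)."""
    pass


def self_evolve(model, steps=1000):
    external_data = []  # tracks the small fraction of extra data requested
    for _ in range(steps):
        task = propose_task(model)
        difficulty = predict_difficulty(model, task)

        if difficulty > LIMIT_THRESHOLD:
            # Task is beyond the model's current limit: request external data.
            external_data.extend(request_external_data(task))
            continue

        if not (TARGET_DIFFICULTY[0] <= difficulty <= TARGET_DIFFICULTY[1]):
            # Skip tasks that are too easy or too hard to be informative.
            continue

        _, solved = attempt_solution(model, task)
        rl_update(model, task, solved)
    return external_data
```

Under these assumptions, `self_evolve` only spends external data on tasks the model flags as out of reach, which is how the "less than 1.2% extra data" figure in the abstract could arise from a mostly self-generated curriculum.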
Similar Papers
KnowRL: Teaching Language Models to Know What They Know
Computation and Language
AI learns when it's right or wrong.
RLSR: Reinforcement Learning from Self Reward
Machine Learning (CS)
AI learns to solve problems by checking its own work.
Language Self-Play For Data-Free Training
Artificial Intelligence
Computers learn to be smarter by playing games.