The Path of Self-Evolving Large Language Models: Achieving Data-Efficient Learning via Intrinsic Feedback

Published: October 3, 2025 | arXiv ID: 2510.02752v1

By: Hangfan Zhang, Siyuan Xu, Zhimeng Guo, and more

Potential Business Impact:

AI learns better with less help.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Reinforcement learning (RL) has demonstrated potential in enhancing the reasoning capabilities of large language models (LLMs), but such training typically demands substantial effort in creating and annotating data. In this work, we explore improving LLMs through RL with minimal data. Our approach alternates between the LLM proposing a task and then attempting to solve it. To minimize data dependency, we introduce two novel mechanisms grounded in self-awareness: (1) self-aware difficulty prediction, where the model learns to assess task difficulty relative to its own abilities and prioritize challenging yet solvable tasks, and (2) self-aware limit breaking, where the model recognizes when a task is beyond its capability boundary and proactively requests external data to break through that limit. Extensive experiments on nine benchmarks show a 53.8% relative improvement with less than 1.2% extra data, demonstrating the efficacy of self-aware RL and underscoring the promise of self-evolving agent training.
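
The abstract describes the training loop only at a high level. Below is a minimal Python sketch of how the propose-solve cycle with the two self-awareness mechanisms might fit together; every name here (ToyModel, propose_task, predict_difficulty, request_external_data) and the 0.3/0.8 difficulty thresholds are illustrative assumptions, not the paper's actual implementation.

import random

class ToyModel:
    """Stand-in for an LLM; every method is an illustrative placeholder."""
    def propose_task(self):
        # The model proposes its own training task.
        return f"task-{random.randint(0, 999)}"
    def predict_difficulty(self, task):
        # Self-aware difficulty prediction: estimated difficulty in [0, 1]
        # relative to the model's own ability.
        return random.random()
    def attempt_solve(self, task):
        # The model attempts the task and receives a scalar reward.
        solved = random.random() < 0.5
        return "solution", 1.0 if solved else 0.0
    def rl_update(self, task, solution, reward):
        pass  # an RL policy update would go here

def request_external_data(task):
    # Placeholder for limit breaking: fetch a small amount of external
    # annotated data when the task exceeds the capability boundary.
    return task

def self_evolve(model, steps=100, low=0.3, high=0.8):
    """Alternate between proposing a task and attempting to solve it."""
    for _ in range(steps):
        task = model.propose_task()
        d = model.predict_difficulty(task)
        if d < low:
            continue  # too easy: little learning signal, skip it
        if d > high:
            # Beyond the capability boundary: proactively request
            # external data rather than attempting the task blindly.
            task = request_external_data(task)
        solution, reward = model.attempt_solve(task)
        model.rl_update(task, solution, reward)

self_evolve(ToyModel())

The thresholds carve the self-assessed difficulty range into "too easy" (skipped), "challenging yet solvable" (trained on directly), and "beyond current ability" (triggers an external-data request), which matches the data-efficiency argument: external data is only consumed at the capability boundary.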

Country of Origin
πŸ‡ΈπŸ‡¬ πŸ‡ΊπŸ‡Έ Singapore, United States

Page Count
17 pages

Category
Computer Science:
Computation and Language