Score: 1

No More Stale Feedback: Co-Evolving Critics for Open-World Agent Learning

Published: January 11, 2026 | arXiv ID: 2601.06794v1

By: Zhicong Li, Lingjie Jiang, Yulan Hu, and more

BigTech Affiliations: Alibaba

Potential Business Impact:

Trains LLM agents with a critic that evolves alongside the policy, so natural-language feedback stays useful throughout training instead of going stale.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Critique-guided reinforcement learning (RL) has emerged as a powerful paradigm for training LLM agents by augmenting sparse outcome rewards with natural-language feedback. However, current methods often rely on static or offline critic models, which fail to adapt as the policy evolves. In on-policy RL, the agent's error patterns shift over time, causing stationary critics to become stale and their feedback to lose utility. To address this, we introduce ECHO (Evolving Critic for Hindsight-Guided Optimization), a framework that jointly optimizes the policy and critic through a synchronized co-evolutionary loop. ECHO utilizes a cascaded rollout mechanism in which the critic generates multiple diagnoses for an initial trajectory, followed by policy refinement, enabling group-structured advantage estimation. We address the challenge of learning plateaus via a saturation-aware gain shaping objective, which rewards the critic for inducing incremental improvements in already high-performing trajectories. By employing dual-track GRPO updates, ECHO keeps the critic's feedback synchronized with the evolving policy. Experimental results show that ECHO yields more stable training and higher long-horizon task success across open-world environments.
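The abstract outlines the key moving parts: a cascaded rollout (initial trajectory, multiple critic diagnoses, refined trajectories), group-structured advantages for both tracks, and a saturation-aware gain shaping term for the critic. The sketch below illustrates how such a loop could be wired together; the function names, the gain-shaping formula, and the group size are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of a co-evolving policy/critic step in the spirit of ECHO.
# Callables (policy_rollout, critic_diagnose, refine_with_feedback) and the
# gain-shaping formula are assumptions for demonstration, not the paper's code.
import statistics


def gain_shaped_critic_reward(score_before: float, score_after: float,
                              saturation: float = 0.8) -> float:
    """Reward the critic for the improvement its diagnosis induced.

    Assumed shaping: once the initial trajectory scores above a saturation
    threshold, the raw gain is rescaled by the remaining headroom so that
    small incremental improvements on high-performing trajectories still
    produce a learnable signal.
    """
    gain = score_after - score_before
    if score_before >= saturation:
        headroom = max(1.0 - score_before, 1e-6)
        return gain / headroom
    return gain


def group_advantages(rewards: list[float]) -> list[float]:
    """GRPO-style group-structured advantage: z-score within one group."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0
    return [(r - mu) / sigma for r in rewards]


def co_evolution_step(policy_rollout, critic_diagnose, refine_with_feedback,
                      task, group_size: int = 4):
    """One cascaded rollout: initial trajectory -> K critic diagnoses ->
    K refined trajectories -> advantages for the policy and critic tracks."""
    trajectory, score0 = policy_rollout(task)                  # initial attempt
    diagnoses = [critic_diagnose(task, trajectory) for _ in range(group_size)]
    refined_scores = [refine_with_feedback(task, trajectory, d) for d in diagnoses]

    policy_adv = group_advantages(refined_scores)              # policy track
    critic_rewards = [gain_shaped_critic_reward(score0, s) for s in refined_scores]
    critic_adv = group_advantages(critic_rewards)              # critic track
    return policy_adv, critic_adv
```

In this reading, the two advantage lists would feed separate GRPO updates for the policy and the critic, which is one plausible way to keep the critic's diagnoses matched to the policy's current error patterns.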

Country of Origin
🇨🇳 China

Page Count
22 pages

Category
Computer Science:
Artificial Intelligence