No More Stale Feedback: Co-Evolving Critics for Open-World Agent Learning
By: Zhicong Li, Lingjie Jiang, Yulan Hu, and more
Potential Business Impact:
Teaches AI to learn better from feedback.
Critique-guided reinforcement learning (RL) has emerged as a powerful paradigm for training LLM agents by augmenting sparse outcome rewards with natural-language feedback. However, current methods often rely on static or offline critic models, which fail to adapt as the policy evolves. In on-policy RL, the agent's error patterns shift over time, so a stationary critic becomes stale and its feedback loses utility. To address this, we introduce ECHO (Evolving Critic for Hindsight-Guided Optimization), a framework that jointly optimizes the policy and critic through a synchronized co-evolutionary loop. ECHO uses a cascaded rollout mechanism in which the critic generates multiple diagnoses for an initial trajectory and the policy then refines that trajectory, enabling group-structured advantage estimation. We address the challenge of learning plateaus with a saturation-aware gain shaping objective, which rewards the critic for inducing incremental improvements in already high-performing trajectories. By employing dual-track GRPO updates, ECHO keeps the critic's feedback synchronized with the evolving policy. Experimental results show that ECHO yields more stable training and higher long-horizon task success across open-world environments.
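To make the co-evolutionary loop concrete, the Python below is a minimal, illustrative sketch under assumed interfaces: StubPolicy, StubCritic, StubTask, the grpo_update hooks, and all thresholds are placeholders invented for this sketch, not the authors' code or API. It follows the steps in the abstract: the critic produces several diagnoses for one rollout, the policy refines the trajectory once per diagnosis, and group-relative advantages drive separate GRPO-style updates for the policy and the critic, with a saturation-aware gain shaping the critic's reward.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative stubs only: these interfaces are assumptions for this sketch,
# not ECHO's actual implementation.

class StubTask:
    def score(self, traj: str) -> float:
        # Toy success score in [0, 1]; refined trajectories score slightly higher.
        return min(1.0, 0.4 + 0.02 * traj.count("refined") + 0.1 * rng.random())

class StubPolicy:
    def rollout(self, task):
        traj = "initial trajectory"
        return traj, task.score(traj)

    def refine(self, task, traj, diagnosis):
        return f"{traj} | refined using {diagnosis}"

    def grpo_update(self, samples, advantages):
        pass  # stand-in for a group-relative policy-gradient (GRPO) step

class StubCritic:
    def critique(self, task, traj):
        return f"diagnosis-{rng.integers(1000)}"

    def grpo_update(self, samples, advantages):
        pass  # stand-in for the critic's GRPO update

def saturation_aware_gain(base, refined, saturation=0.9, boost=2.0):
    # Gain-shaping sketch: reward the critic for the improvement it induces,
    # up-weighting incremental gains on already high-performing trajectories
    # so the learning signal does not plateau (thresholds are assumptions).
    weight = boost if base >= saturation else 1.0
    return weight * (refined - base)

def group_advantages(rewards):
    # GRPO-style group-relative advantages: normalize within the group.
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def echo_step(policy, critic, task, num_critiques=4):
    # 1) Cascaded rollout: one initial trajectory from the current policy.
    traj, base_score = policy.rollout(task)

    # 2) The critic generates a group of diagnoses for that trajectory.
    diagnoses = [critic.critique(task, traj) for _ in range(num_critiques)]

    # 3) The policy refines the trajectory once per diagnosis.
    refined = [policy.refine(task, traj, d) for d in diagnoses]
    refined_scores = [task.score(t) for t in refined]

    # 4) Group-structured advantage estimation for both tracks.
    policy_adv = group_advantages(refined_scores)
    critic_adv = group_advantages(
        [saturation_aware_gain(base_score, s) for s in refined_scores])

    # 5) Dual-track GRPO updates keep the critic synchronized with the policy.
    policy.grpo_update(refined, policy_adv)
    critic.grpo_update(diagnoses, critic_adv)

echo_step(StubPolicy(), StubCritic(), StubTask())

Because the critic is rewarded through the same group-relative mechanism as the policy, its updates track the policy's current error distribution rather than a fixed offline snapshot, which is the stale-feedback problem the paper targets.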
Similar Papers
Sample-Efficient Online Learning in LM Agents via Hindsight Trajectory Rewriting
Machine Learning (CS)
Teaches AI to learn better from mistakes.
What Makes Reasoning Invalid: Echo Reflection Mitigation for Large Language Models
Artificial Intelligence
Helps computers think deeper, not just repeat.
Wisdom of the Crowd: Reinforcement Learning from Coevolutionary Collective Feedback
Artificial Intelligence
Helps many AI models learn together to solve problems.