AutoForge: Automated Environment Synthesis for Agentic Reinforcement Learning

Published: December 28, 2025 | arXiv ID: 2512.22857v1

By: Shihao Cai, Runnan Fang, Jialong Wu, and more

Potential Business Impact:

Teaches AI agents to learn hard tasks in simulated worlds.

Business Areas:
Simulation Software

Conducting reinforcement learning (RL) in simulated environments offers a cost-effective and highly scalable way to enhance language-based agents. However, previous work has been limited to semi-automated environment synthesis or to tasks lacking sufficient difficulty, offering little breadth or depth. In addition, the instability of the simulated users integrated into these environments, along with the heterogeneity across simulated environments, poses further challenges for agentic RL. In this work, we propose: (1) a unified pipeline for the automated, scalable synthesis of simulated environments paired with high-difficulty but easily verifiable tasks; and (2) an environment-level RL algorithm that not only mitigates user instability but also performs advantage estimation at the environment level, improving training efficiency and stability. Comprehensive evaluations on agentic benchmarks, including tau-bench, tau2-Bench, and VitaBench, validate the effectiveness of the proposed method, and further in-depth analyses underscore its out-of-domain generalization.
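The abstract's "advantage estimation at the environment level" can be illustrated with a minimal sketch. The paper's actual algorithm is not given here, so the following is an assumption-laden toy: rollout rewards are baselined against other rollouts from the *same* simulated environment, so heterogeneity across environments does not distort the advantage signal. The function name and the `(env_id, reward)` input format are invented for illustration.

```python
from collections import defaultdict
import statistics

def environment_level_advantages(rollouts):
    """Toy environment-level advantage estimation (not the paper's method).

    Each rollout's reward is normalized against the mean and standard
    deviation of rewards collected in the same environment, so that an
    easy environment with uniformly high rewards contributes no spurious
    positive advantage.

    rollouts: list of (env_id, reward) pairs.
    Returns a list of advantages aligned with the input order.
    """
    # Group rewards by the environment they came from.
    by_env = defaultdict(list)
    for env_id, reward in rollouts:
        by_env[env_id].append(reward)

    # Per-environment baseline statistics.
    stats = {
        env_id: (statistics.fmean(rs), statistics.pstdev(rs))
        for env_id, rs in by_env.items()
    }

    # Advantage = reward standardized within its own environment group.
    return [
        (reward - stats[env_id][0]) / (stats[env_id][1] + 1e-8)
        for env_id, reward in rollouts
    ]
```

Under this sketch, a rollout that merely matches its environment's typical reward receives an advantage near zero, while only rollouts that outperform their environment-mates are reinforced.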

Page Count
12 pages

Category
Computer Science:
Computation and Language