Scaling Environments for LLM Agents in the Era of Learning from Interaction: A Survey

Published: November 12, 2025 | arXiv ID: 2511.09586v1

By: Yuchen Huang, Sijia Li, Minghao Liu, and more

Potential Business Impact:
Teaches AI to learn by doing, not just reading.

Business Areas:
Simulation Software

LLM-based agents can autonomously accomplish complex tasks across various domains. However, to further cultivate capabilities such as adaptive behavior and long-term decision-making, training on static datasets built from human-level knowledge is insufficient. These datasets are costly to construct and lack both dynamism and realism. A growing consensus is that agents should instead interact directly with environments and learn from experience through reinforcement learning. We formalize this iterative process as the Generation-Execution-Feedback (GEF) loop, where environments generate tasks to challenge agents, return observations in response to agents' actions during task execution, and provide evaluative feedback on rollouts for subsequent learning. Under this paradigm, environments function as indispensable producers of experiential data, highlighting the need to scale them toward greater complexity, realism, and interactivity. In this survey, we systematically review representative methods for environment scaling from a pioneering environment-centric perspective and organize them along the stages of the GEF loop, namely task generation, task execution, and feedback. We further analyze benchmarks, implementation strategies, and applications, consolidating fragmented advances and outlining future research directions for agent intelligence.

Country of Origin
🇭🇰 Hong Kong

Repos / Data Links

Page Count
20 pages

Category
Computer Science: Machine Learning (CS)