Scaling Environments for LLM Agents in the Era of Learning from Interaction: A Survey
By: Yuchen Huang, Sijia Li, Minghao Liu, and more
Potential Business Impact:
Teaches AI to learn by doing, not just reading.
LLM-based agents can autonomously accomplish complex tasks across various domains. However, to further cultivate capabilities such as adaptive behavior and long-term decision-making, training on static datasets built from human-level knowledge is insufficient. These datasets are costly to construct and lack both dynamism and realism. A growing consensus is that agents should instead interact directly with environments and learn from experience through reinforcement learning. We formalize this iterative process as the Generation-Execution-Feedback (GEF) loop, where environments generate tasks to challenge agents, return observations in response to agents' actions during task execution, and provide evaluative feedback on rollouts for subsequent learning. Under this paradigm, environments function as indispensable producers of experiential data, highlighting the need to scale them toward greater complexity, realism, and interactivity. In this survey, we systematically review representative methods for environment scaling from a pioneering environment-centric perspective and organize them along the stages of the GEF loop, namely task generation, task execution, and feedback. We further analyze benchmarks, implementation strategies, and applications, consolidating fragmented advances and outlining future research directions for agent intelligence.
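To make the GEF loop concrete, here is a minimal Python sketch of one iteration of generation, execution, and feedback. All names here (Environment methods generate_task, reset, step, evaluate; Agent methods act, update) are illustrative assumptions chosen for this sketch, not APIs from any surveyed system.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Rollout:
    """One trajectory through the GEF loop: a task plus (observation, action) steps."""
    task: Any
    steps: list = field(default_factory=list)
    reward: float = 0.0

def gef_loop(env, agent, num_iterations: int, max_steps: int = 10):
    """Run the Generation-Execution-Feedback loop and collect rollouts.

    `env` and `agent` are hypothetical objects standing in for an
    interactive environment and an LLM-based agent, respectively.
    """
    rollouts = []
    for _ in range(num_iterations):
        # Generation: the environment produces a task to challenge the agent.
        task = env.generate_task()
        observation = env.reset(task)
        rollout = Rollout(task=task)

        # Execution: the agent acts; the environment returns observations.
        for _ in range(max_steps):
            action = agent.act(observation)
            observation, done = env.step(action)
            rollout.steps.append((observation, action))
            if done:
                break

        # Feedback: the environment evaluates the rollout, and the agent
        # learns from it (e.g., via a reinforcement-learning update).
        rollout.reward = env.evaluate(rollout)
        agent.update(rollout)
        rollouts.append(rollout)
    return rollouts
```

Under this framing, scaling the environment means enriching each of the three stages: harder generated tasks, richer observations during execution, and more informative feedback signals.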
Similar Papers
Scaling Environments for Organoid Intelligence with LLM-Automated Design and Plasticity-Based Evaluation
Neural and Evolutionary Computing
Teaches brain cells to play games.
Towards General Agentic Intelligence via Environment Scaling
Computation and Language
Teaches AI to use tools better.
Towards Agentic Self-Learning LLMs in Search Environment
Artificial Intelligence
Teaches computers to learn and improve tasks alone.