EnvScaler: Scaling Tool-Interactive Environments for LLM Agent via Programmatic Synthesis
By: Xiaoshuai Song, Haofei Chang, Guanting Dong, and more
Potential Business Impact:
Teaches AI to use tools in many situations.
Large language models (LLMs) are expected to be trained to act as agents in various real-world environments, but this process relies on rich and varied tool-interaction sandboxes. However, access to real systems is often restricted; LLM-simulated environments are prone to hallucinations and inconsistencies; and manually built sandboxes are hard to scale. In this paper, we propose EnvScaler, an automated framework that scales tool-interaction environments via programmatic synthesis. EnvScaler comprises two components. First, SkelBuilder constructs diverse environment skeletons through topic mining, logic modeling, and quality evaluation. Then, ScenGenerator generates multiple task scenarios and rule-based trajectory validation functions for each environment. With EnvScaler, we synthesize 191 environments and about 7K scenarios, and apply them to Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) for Qwen3-series models. Results on three benchmarks show that EnvScaler significantly improves LLMs' ability to solve tasks in complex environments involving multi-turn, multi-tool interactions. We release our code and data at https://github.com/RUC-NLPIR/EnvScaler.
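To make the idea of a "rule-based trajectory validation function" concrete, here is a minimal hypothetical sketch of what such a check might look like for one synthesized scenario. All names, the trajectory format, and the toy flight-booking scenario are illustrative assumptions, not taken from the paper's released code.

```python
# Hypothetical rule-based trajectory validator in the spirit of the checks
# ScenGenerator is described as producing (structure and names are assumed).

def validate_trajectory(trajectory, required_calls, final_state_check):
    """Check that an agent trajectory satisfies a scenario's rules.

    trajectory: list of {"tool": str, "args": dict, "result": dict} steps.
    required_calls: tool names that must appear in this order (gaps allowed).
    final_state_check: predicate over the last step's result.
    """
    # Rule 1: the required tool calls must appear in order.
    idx = 0
    for step in trajectory:
        if idx < len(required_calls) and step["tool"] == required_calls[idx]:
            idx += 1
    if idx < len(required_calls):
        return False

    # Rule 2: the final environment state must satisfy the scenario's goal.
    return bool(trajectory) and final_state_check(trajectory[-1]["result"])


# Usage: a toy "book a flight" scenario.
traj = [
    {"tool": "search_flights", "args": {"to": "SFO"}, "result": {"flights": ["F1"]}},
    {"tool": "book_flight", "args": {"flight": "F1"}, "result": {"status": "confirmed"}},
]
ok = validate_trajectory(
    traj,
    required_calls=["search_flights", "book_flight"],
    final_state_check=lambda r: r.get("status") == "confirmed",
)
```

Because such checks are plain code over the trajectory rather than LLM judgments, they give deterministic reward signals for the SFT/RL training described above.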
Similar Papers
SCALER: Synthetic Scalable Adaptive Learning Environment for Reasoning
Artificial Intelligence
Teaches computers to solve harder problems better.
Scaling Environments for LLM Agents in the Era of Learning from Interaction: A Survey
Machine Learning (CS)
Teaches AI to learn by doing, not just reading.
GenEnv: Difficulty-Aligned Co-Evolution Between LLM Agents and Environment Simulators
Computation and Language
Teaches AI new skills by playing games.