The Agent's First Day: Benchmarking Learning, Exploration, and Scheduling in Workplace Scenarios
By: Daocheng Fu, Jianbiao Mei, Rong Wu, and more
The rapid evolution of Multi-modal Large Language Models (MLLMs) has advanced workflow automation; however, existing research mainly targets performance upper bounds in static environments, overlooking the robustness required for stochastic real-world deployment. We identify three key challenges: dynamic task scheduling, active exploration under uncertainty, and continuous learning from experience. To bridge this gap, we introduce EvoEnv, a dynamic evaluation environment that simulates a "trainee" agent continuously exploring a novel setting. Unlike traditional benchmarks, EvoEnv evaluates agents along three dimensions: (1) context-aware scheduling for streaming tasks with varying priorities; (2) prudent information acquisition that reduces hallucination via active exploration; and (3) continuous evolution by distilling generalized strategies from rule-based, dynamically generated tasks. Experiments show that cutting-edge agents exhibit significant deficiencies in dynamic environments, especially in active exploration and continual learning. Our work establishes a framework for assessing agent reliability, shifting evaluation from static tests to realistic, production-oriented scenarios. Our code is available at https://github.com/KnowledgeXLab/EvoEnv.
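To make dimension (1) concrete, the sketch below illustrates the kind of context-aware scheduling problem the benchmark poses: tasks stream in over time with different priorities, and the agent must decide what to handle next. This is only an illustrative toy in Python; the `Task` and `StreamScheduler` names are hypothetical and are not part of EvoEnv's actual API.

```python
import heapq
import itertools
from dataclasses import dataclass, field

# Hypothetical illustration of priority-aware scheduling for streaming tasks.
# These names are NOT from EvoEnv; they only sketch the evaluation setting.

@dataclass
class Task:
    priority: int   # lower value = more urgent
    arrival: int    # time step at which the task appeared
    name: str

class StreamScheduler:
    """Maintains a priority queue of tasks that arrive over time and
    always returns the most urgent pending task next."""

    def __init__(self) -> None:
        self._queue: list[tuple[int, int, Task]] = []
        self._counter = itertools.count()  # tie-breaker for equal priorities

    def submit(self, task: Task) -> None:
        heapq.heappush(self._queue, (task.priority, next(self._counter), task))

    def next_task(self) -> Task | None:
        if not self._queue:
            return None
        return heapq.heappop(self._queue)[2]

# Example: an urgent task that arrives later is still handled first.
scheduler = StreamScheduler()
scheduler.submit(Task(priority=5, arrival=0, name="file weekly report"))
scheduler.submit(Task(priority=1, arrival=1, name="reply to manager"))
print(scheduler.next_task().name)  # -> "reply to manager"
```

In the benchmark's dynamic setting, the agent itself must infer priorities from context rather than being handed them explicitly, which is what distinguishes this dimension from static task-completion tests.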