Training One Model to Master Cross-Level Agentic Actions via Reinforcement Learning
By: Kaichen He, Zihao Wang, Muyao Li, and more
Potential Business Impact:
AI learns to pick the right kind of action for each step of a task.
The paradigm of agentic AI is shifting from engineered, complex workflows to post-training native models. However, existing agents are typically confined to static, predefined action spaces, such as exclusively using APIs, GUI events, or robotic commands. This rigidity limits their adaptability in dynamic environments where the optimal granularity of interaction varies with context. To bridge this gap, we propose CrossAgent, a unified agentic model that masters heterogeneous action spaces and autonomously selects the most effective interface at each step of a trajectory. We introduce a comprehensive training pipeline that combines cold-start supervised fine-tuning with a Multi-Turn Group Relative Policy Optimization (GRPO) algorithm. This approach enables the agent to learn adaptive action switching, balancing high-level efficiency with low-level precision, without human-specified rules. Extensive experiments on over 800 tasks in the open-world Minecraft environment demonstrate that CrossAgent achieves state-of-the-art performance. By dynamically leveraging the strengths of diverse action spaces, our model significantly outperforms fixed-action baselines, exhibiting superior generalization and efficiency in long-horizon reasoning. All code and models are available at https://github.com/CraftJarvis/OpenHA.
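The core of GRPO is scoring each sampled trajectory relative to the other trajectories in its sampling group, rather than against a learned value critic. A minimal sketch of that group-relative advantage step (a hypothetical helper, not the paper's code; the paper's multi-turn variant would further distribute these advantages across the turns of each trajectory):

```python
def group_relative_advantages(rewards):
    """Normalize each rollout's reward against its own sampling group.

    GRPO samples a group of rollouts for the same task prompt and uses
    (reward - group mean) / group std as the advantage, so no value
    network is needed.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    if std == 0:  # all rollouts scored the same: no learning signal
        return [0.0] * n
    return [(r - mean) / std for r in rewards]

# Example: four rollouts of the same Minecraft task with binary success reward.
advantages = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
```

Successful rollouts get positive advantage and failed ones negative, which is what pushes the policy toward the action-space choices that worked in that group.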
Similar Papers
In-the-Flow Agentic System Optimization for Effective Planning and Tool Use
Artificial Intelligence
Helps AI agents learn to solve harder problems.
Multi-Agent Deep Research: Training Multi-Agent Systems with M-GRPO
Artificial Intelligence
Helps AI agents learn specialized jobs better.
Towards General Computer Control with Hierarchical Agents and Multi-Level Action Spaces
Artificial Intelligence
Lets computers control apps faster and on your device.