Tool-R1: Sample-Efficient Reinforcement Learning for Agentic Tool Use

Published: September 16, 2025 | arXiv ID: 2509.12867v1

By: Yabo Zhang, Yihan Zeng, Qingyun Li, and more

Potential Business Impact:

Enables LLMs to write and run code that calls external tools, improving accuracy on complex multi-step tasks.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) have demonstrated strong capabilities in language understanding and reasoning, yet they remain limited when tackling real-world tasks that require up-to-date knowledge, precise operations, or specialized tool use. To address this, we propose Tool-R1, a reinforcement learning framework that enables LLMs to perform general, compositional, and multi-step tool use by generating executable Python code. Tool-R1 supports integration of user-defined tools and standard libraries, with variable sharing across steps to construct coherent workflows. An outcome-based reward function, combining LLM-based answer judgment and code execution success, guides policy optimization. To improve training efficiency, we maintain a dynamic sample queue to cache and reuse high-quality trajectories, reducing the overhead of costly online sampling. Experiments on the GAIA benchmark show that Tool-R1 substantially improves both accuracy and robustness, achieving a gain of about 10% over strong baselines, with larger improvements on complex multi-step tasks. These results highlight the potential of Tool-R1 for enabling reliable and efficient tool-augmented reasoning in real-world applications. Our code will be available at https://github.com/YBYBZhang/Tool-R1.
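To make the two training ideas in the abstract concrete, here is a minimal sketch of an outcome-based reward that blends an LLM-judge answer score with code-execution success, plus a fixed-capacity sample queue that caches high-reward trajectories for reuse. All names, the weighting `alpha`, and the heap-based queue design are assumptions for illustration, not the authors' implementation.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Trajectory:
    reward: float
    steps: list = field(compare=False)  # generated code snippets per step

def outcome_reward(judge_score: float, exec_success_rate: float,
                   alpha: float = 0.5) -> float:
    """Blend the LLM-judge answer score with the fraction of code
    steps that executed successfully (the 50/50 weighting is an
    assumption, not taken from the paper)."""
    return alpha * judge_score + (1 - alpha) * exec_success_rate

class SampleQueue:
    """Fixed-capacity queue keeping only the highest-reward
    trajectories, so they can be reused instead of re-sampled."""
    def __init__(self, capacity: int = 4):
        self.capacity = capacity
        self._heap: list[Trajectory] = []  # min-heap: lowest reward on top

    def push(self, traj: Trajectory) -> None:
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, traj)
        elif traj.reward > self._heap[0].reward:
            # evict the currently worst cached trajectory
            heapq.heapreplace(self._heap, traj)

    def best(self) -> Trajectory:
        return max(self._heap)

# Usage: cache three sampled trajectories with capacity two; the
# lowest-reward one is evicted and the best remains available.
q = SampleQueue(capacity=2)
q.push(Trajectory(outcome_reward(0.9, 1.0), steps=["step_a"]))
q.push(Trajectory(outcome_reward(0.4, 0.5), steps=["step_b"]))
q.push(Trajectory(outcome_reward(0.8, 0.8), steps=["step_c"]))
print(round(q.best().reward, 2))
```

The min-heap keeps eviction O(log k) for a queue of size k, which matters only if the cache is large; a plain sorted list would do for small capacities.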

Page Count
15 pages

Category
Computer Science:
Machine Learning (CS)