Tool-R1: Sample-Efficient Reinforcement Learning for Agentic Tool Use
By: Yabo Zhang, Yihan Zeng, Qingyun Li, and more
Potential Business Impact:
Lets AI models write and run code to use tools on hard, multi-step problems.
Large language models (LLMs) have demonstrated strong capabilities in language understanding and reasoning, yet they remain limited when tackling real-world tasks that require up-to-date knowledge, precise operations, or specialized tool use. To address this, we propose Tool-R1, a reinforcement learning framework that enables LLMs to perform general, compositional, and multi-step tool use by generating executable Python code. Tool-R1 supports integration of user-defined tools and standard libraries, with variable sharing across steps to construct coherent workflows. An outcome-based reward function, combining LLM-based answer judgment and code execution success, guides policy optimization. To improve training efficiency, we maintain a dynamic sample queue to cache and reuse high-quality trajectories, reducing the overhead of costly online sampling. Experiments on the GAIA benchmark show that Tool-R1 substantially improves both accuracy and robustness, achieving about a 10% gain over strong baselines, with larger improvements on complex multi-step tasks. These results highlight the potential of Tool-R1 for enabling reliable and efficient tool-augmented reasoning in real-world applications. Our code will be available at https://github.com/YBYBZhang/Tool-R1.
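To make the abstract's mechanics concrete, here is a minimal Python sketch, based only on the abstract, of the three ideas it names: multi-step code execution with variable sharing, an outcome-based reward combining code execution success with LLM-based answer judgment, and a dynamic sample queue that caches high-reward trajectories. All names (execute_steps, outcome_reward, SampleQueue), the reward weights, and the queue parameters are illustrative assumptions, not the authors' implementation.

import heapq
import itertools


def execute_steps(code_steps):
    """Run multi-step tool code in one shared namespace so variables
    persist across steps (the abstract's variable sharing)."""
    namespace = {}
    for step in code_steps:
        try:
            exec(step, namespace)  # each step may call user tools or stdlib
        except Exception:
            return namespace, False  # execution failure ends the rollout
    return namespace, True


def outcome_reward(code_steps, predicted_answer, reference, llm_judge):
    """Outcome-based reward: combine execution success with an LLM-based
    judgment of the final answer. The 0.3/0.7 weighting is an assumption."""
    _, executed_ok = execute_steps(code_steps)
    exec_reward = 1.0 if executed_ok else 0.0
    judge_reward = llm_judge(predicted_answer, reference)  # assumed in [0, 1]
    return 0.3 * exec_reward + 0.7 * judge_reward


class SampleQueue:
    """Dynamic queue caching high-reward trajectories for reuse, reducing
    the overhead of costly online sampling during policy optimization."""

    def __init__(self, capacity=256, threshold=0.8):
        self.capacity = capacity
        self.threshold = threshold
        self._heap = []  # min-heap keyed by reward
        self._counter = itertools.count()  # tie-breaker for heap ordering

    def maybe_add(self, trajectory, reward):
        """Cache a trajectory only if its reward clears the threshold."""
        if reward < self.threshold:
            return
        heapq.heappush(self._heap, (reward, next(self._counter), trajectory))
        if len(self._heap) > self.capacity:
            heapq.heappop(self._heap)  # evict the lowest-reward sample

    def sample(self, k):
        """Return up to k of the best cached trajectories for the batch."""
        return [traj for _, _, traj in heapq.nlargest(k, self._heap)]

Under these assumptions, a training step would score fresh rollouts with outcome_reward, cache the good ones via maybe_add, and pad each policy-update batch with queue.sample(k), so fewer expensive online samples are needed per update.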
Similar Papers
Encouraging Good Processes Without the Need for Good Answers: Reinforcement Learning for LLM Agent Planning
Machine Learning (CS)
Teaches AI to plan better, making answers smarter.
One Model to Critique Them All: Rewarding Agentic Tool-Use via Efficient Reasoning
Artificial Intelligence
Helps AI learn to use tools more effectively.
ToolRL: Reward is All Tool Learning Needs
Machine Learning (CS)
Teaches AI to use new tools better.