Score: 1

ToolACE-R: Model-aware Iterative Training and Adaptive Refinement for Tool Learning

Published: April 2, 2025 | arXiv ID: 2504.01400v2

By: Xingshan Zeng, Weiwen Liu, Xu Huang, and more

BigTech Affiliations: Huawei

Potential Business Impact:

Teaches computers to use tools better.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Tool learning, which allows Large Language Models (LLMs) to leverage external tools for solving complex user tasks, has emerged as a promising avenue for extending model capabilities. However, existing approaches primarily focus on data synthesis for fine-tuning LLMs to invoke tools effectively, largely ignoring how to fully stimulate the model's potential. In this paper, we propose ToolACE-R, a novel framework that combines model-aware iterative training with adaptive refinement for tool learning. ToolACE-R features a model-aware iterative training procedure that progressively adjusts training samples based on the model's evolving capabilities, maximizing its potential. It also incorporates a self-refinement training corpus that strengthens the LLM's ability to iteratively refine its tool calls, improving performance without requiring external feedback. Furthermore, we introduce an adaptive self-refinement mechanism for efficient test-time scaling, in which the trained model autonomously determines when to stop the iterative self-refinement process. Extensive experiments across several benchmark datasets show that ToolACE-R achieves competitive performance compared with advanced API-based models, and that tool-invocation performance can be further improved efficiently through adaptive self-refinement. These results highlight the effectiveness and generalizability of ToolACE-R, offering a promising direction for more efficient and scalable tool learning.
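
To make the adaptive self-refinement idea from the abstract more concrete, the Python sketch below shows one way such a test-time loop could look. It is an illustrative sketch only, not the paper's implementation: the `generate_tool_call` and `refine_tool_call` methods, the `should_stop` signal, and the `max_rounds` cap are all hypothetical names assumed for the example.

```python
def adaptive_self_refinement(model, query, tools, max_rounds=3):
    """Sketch of a test-time loop where the model refines its own tool call
    and decides on its own when to stop (adaptive test-time scaling)."""
    # Initial tool-call attempt from the fine-tuned model (hypothetical API).
    call = model.generate_tool_call(query, tools)

    for _ in range(max_rounds):
        # The model inspects its previous call and either returns an improved
        # version or signals that no further refinement is needed.
        refined_call, should_stop = model.refine_tool_call(query, tools, call)
        if should_stop:
            # Adaptive stopping: no external feedback or fixed budget required.
            break
        call = refined_call

    return call
```

The key design point this sketch tries to capture is that the stopping decision comes from the model itself rather than from an external verifier or a fixed number of refinement steps, which is what makes the test-time scaling efficient.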

Country of Origin
🇨🇳 China

Page Count
16 pages

Category
Computer Science:
Computation and Language