ToolACE-MT: Non-Autoregressive Generation for Agentic Multi-Turn Interaction
By: Xingshan Zeng, Weiwen Liu, Lingzhi Wang, and more
Potential Business Impact:
Makes AI assistants better at handling multi-step tasks that require calling external tools.
Agentic task-solving with Large Language Models (LLMs) requires multi-turn, multi-step interactions, often involving complex function calls and dynamic user-agent exchanges. Existing simulation-based data generation methods for such scenarios rely heavily on costly autoregressive interactions between multiple LLM agents, thereby limiting real-world performance on agentic tasks. In this paper, we propose ToolACE-MT, a novel Non-Autoregressive Iterative Generation framework for constructing high-quality multi-turn agentic dialogues. ToolACE-MT generates full conversational trajectories through three stages: coarse-grained initialization, iterative refinement, and offline verification. The initialization stage builds a structurally complete yet semantically coarse dialogue skeleton; the iterative refinement stage injects realistic complexities and progressively refines the dialogue via mask-and-fill operations; and the offline verification stage ensures correctness and coherence via rule- and model-based checks. Experiments demonstrate that ToolACE-MT enables efficient, effective, and generalizable agentic data generation, offering a new paradigm for high-quality data construction in tool-augmented LLM scenarios.
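To make the three-stage pipeline concrete, below is a minimal Python sketch of the workflow the abstract describes: draft a full dialogue skeleton at once, refine it with repeated mask-and-fill passes, then filter it with offline checks. All names here (Turn, Dialogue, draft_skeleton, mask_and_fill, rule_checks, model_checks, max_refine_iters) are illustrative assumptions, not the paper's actual API or implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Turn:
    role: str            # "user", "assistant", or "tool"
    content: str         # utterance, function call, or tool result
    masked: bool = False # marked for regeneration during refinement

@dataclass
class Dialogue:
    turns: List[Turn] = field(default_factory=list)

def generate_trajectory(
    task_spec: str,
    draft_skeleton: Callable[[str], Dialogue],      # stage 1: coarse initialization
    mask_and_fill: Callable[[Dialogue], Dialogue],  # stage 2: one refinement pass
    rule_checks: Callable[[Dialogue], bool],        # stage 3a: rule-based verification
    model_checks: Callable[[Dialogue], bool],       # stage 3b: model-based verification
    max_refine_iters: int = 4,
) -> Optional[Dialogue]:
    """Build one multi-turn agentic dialogue non-autoregressively:
    draft the whole trajectory at once, then iteratively repair it."""
    # Stage 1: structurally complete but semantically coarse skeleton.
    dialogue = draft_skeleton(task_spec)

    # Stage 2: iterative refinement over the full trajectory via mask-and-fill,
    # rather than simulating the conversation turn by turn with multiple agents.
    for _ in range(max_refine_iters):
        dialogue = mask_and_fill(dialogue)

    # Stage 3: offline verification; discard trajectories that fail any check.
    if rule_checks(dialogue) and model_checks(dialogue):
        return dialogue
    return None
```

In this sketch the expensive multi-agent, turn-by-turn simulation is replaced by whole-trajectory drafting plus a small, fixed number of refinement passes, which is the efficiency argument the abstract makes.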
Similar Papers
ToolACE-R: Model-aware Iterative Training and Adaptive Refinement for Tool Learning
Computation and Language
Teaches computers to use tools better.
Adaptive Tool Generation with Models as Tools and Reinforcement Learning
Computation and Language
Teaches AI to use tools without real-time internet.
AutoTool: Dynamic Tool Selection and Integration for Agentic Reasoning
Computation and Language
Lets AI learn to pick the right tools.