Adaptive Tool Generation with Models as Tools and Reinforcement Learning
By: Chenpeng Wang, Xiaojie Cheng, Chunye Wang, and more
Potential Business Impact:
Teaches AI to use tools without real-time internet access.
Tool-augmented language models have demonstrated strong capabilities, but their reliance on live API access creates scalability and reliability challenges during training and deployment. We propose MTR, a simulation-first training framework for tool-augmented reasoning. Instead of relying on live APIs, MTR learns from complete ReAct traces with schema-validated, simulated observations. Our approach operates through a multi-agent architecture in which a ToolMaker generates task-specific, OpenAI-compatible tool interfaces, an AutoAgent produces structured think-act-observe sequences, and a ToolActor simulates realistic responses. Training proceeds in two stages: Stage-1 Supervised Fine-Tuning (SFT) teaches 'trace grammar' from complete reasoning sequences; Stage-2 Group Relative Policy Optimization (GRPO) optimizes strategy with a composite trace reward that balances answer correctness and internal consistency. Across four multi-hop QA benchmarks (HotpotQA, MuSiQue, 2WikiMultiHopQA, Bamboogle), MTR attains Exact Match (EM) scores competitive with live-API systems and excels on reasoning-intensive tasks, suggesting that effective tool reasoning can be learned from structured traces without live interactions.
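To make the ToolMaker/ToolActor loop concrete, here is a minimal Python sketch of how a generated OpenAI-compatible tool interface and a schema-validated simulated observation could fit together. The tool name (wiki_search), its fields, and the canned observation are hypothetical illustrations; the paper's actual interfaces and simulation model are not specified here.

```python
# Hedged sketch of the ToolMaker -> ToolActor loop described in the abstract.
# The tool definition and the simulated observation are illustrative only.
import json
from jsonschema import validate, ValidationError

# An OpenAI-compatible tool interface, as a ToolMaker might emit for a QA task.
TOOL = {
    "type": "function",
    "function": {
        "name": "wiki_search",  # hypothetical tool name
        "description": "Search an offline wiki snapshot for a short passage.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search query."},
            },
            "required": ["query"],
        },
    },
}

def tool_actor(tool: dict, arguments: str) -> str:
    """Simulate a tool response: schema-validate the call, then return a
    plausible observation instead of hitting a live API."""
    try:
        args = json.loads(arguments)
        validate(instance=args, schema=tool["function"]["parameters"])
    except (json.JSONDecodeError, ValidationError) as err:
        # Invalid calls still yield structured feedback for the trace.
        return json.dumps({"error": str(err)})
    # A real ToolActor would generate this with an LLM; a canned string suffices here.
    return json.dumps({"passage": f"Simulated passage relevant to: {args['query']}"})

print(tool_actor(TOOL, '{"query": "Who directed Inception?"}'))
```

Because the observation is simulated rather than fetched, every think-act-observe step can be generated and validated offline, which is what lets complete ReAct traces be collected at training scale.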
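The Stage-2 objective can likewise be sketched. The abstract only states that the composite trace reward balances answer correctness and internal consistency; the 0.7/0.3 weights, the EM normalization, and the structural-completeness consistency heuristic below are assumptions, as is the standard group-mean/std advantage normalization commonly used in GRPO.

```python
# Hedged sketch of a composite trace reward and GRPO-style group-relative
# advantages. Weights and the consistency heuristic are illustrative guesses.
import re
import statistics

def normalize(text: str) -> str:
    """SQuAD-style EM normalization: lowercase, drop articles and punctuation."""
    text = re.sub(r"\b(a|an|the)\b", " ", text.lower())
    text = re.sub(r"[^a-z0-9 ]", " ", text)
    return " ".join(text.split())

def trace_reward(answer: str, gold: str, steps: list[dict],
                 w_em: float = 0.7, w_con: float = 0.3) -> float:
    """Combine answer correctness (EM) with a toy internal-consistency signal:
    the fraction of think-act-observe steps that are structurally complete."""
    em = float(normalize(answer) == normalize(gold))
    ok = sum(1 for s in steps if all(k in s for k in ("think", "act", "observe")))
    consistency = ok / len(steps) if steps else 0.0
    return w_em * em + w_con * consistency

def grpo_advantages(rewards: list[float]) -> list[float]:
    """GRPO scores each sampled trace relative to its group of rollouts."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mean) / std for r in rewards]

group = [
    trace_reward("Christopher Nolan", "Christopher Nolan",
                 [{"think": "...", "act": "...", "observe": "..."}]),
    trace_reward("Nolan", "Christopher Nolan", []),
]
print(grpo_advantages(group))
```

Grouping rollouts per question and normalizing within the group removes the need for a learned value baseline, which is the usual motivation for GRPO over PPO.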
Similar Papers
Nemotron-Research-Tool-N1: Exploring Tool-Using Language Models with Reinforced Reasoning
Computation and Language
Teaches AI to use tools more effectively through reinforced reasoning.
ML-Tool-Bench: Tool-Augmented Planning for ML Tasks
Machine Learning (CS)
Helps AI agents plan complex data tasks better.
ReTool: Reinforcement Learning for Strategic Tool Use in LLMs
Computation and Language
Helps computers solve hard math problems better.