MatchTIR: Fine-Grained Supervision for Tool-Integrated Reasoning via Bipartite Matching
By: Changle Qu, Sunhao Dai, Hengyi Cai, and more
Tool-Integrated Reasoning (TIR) empowers large language models (LLMs) to tackle complex tasks by interleaving reasoning steps with external tool interactions. However, existing reinforcement learning methods typically rely on outcome- or trajectory-level rewards, assigning uniform advantages to all steps within a trajectory. This coarse-grained credit assignment fails to distinguish effective tool calls from redundant or erroneous ones, particularly in long-horizon multi-turn scenarios. To address this, we propose MatchTIR, a framework that introduces fine-grained supervision via bipartite matching-based turn-level reward assignment and dual-level advantage estimation. Specifically, we formulate credit assignment as a bipartite matching problem between predicted and ground-truth traces, utilizing two assignment strategies to derive dense turn-level rewards. Furthermore, to balance local step precision with global task success, we introduce a dual-level advantage estimation scheme that integrates turn-level and trajectory-level signals, assigning distinct advantage values to individual interaction turns. Extensive experiments on three benchmarks demonstrate the superiority of MatchTIR. Notably, our 4B model surpasses the majority of 8B competitors, particularly in long-horizon and multi-turn tasks. Our code is available at https://github.com/quchangle1/MatchTIR.
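The abstract describes two components: turn-level rewards obtained by bipartite matching between predicted and ground-truth tool calls, and a dual-level advantage that blends turn-level and trajectory-level signals. The sketch below illustrates that idea using the Hungarian algorithm (scipy's linear_sum_assignment); the call_similarity function, the zero reward for unmatched turns, and the mixing weight alpha are illustrative assumptions, not the paper's exact formulation or its two assignment strategies.

```python
# Minimal, illustrative sketch of matching-based turn rewards and
# dual-level advantages, under the assumptions stated above.
import numpy as np
from scipy.optimize import linear_sum_assignment


def call_similarity(pred_call: dict, gold_call: dict) -> float:
    """Hypothetical similarity between a predicted and a gold tool call:
    0 on a tool-name mismatch, otherwise 0.5 plus 0.5 times the fraction
    of gold arguments reproduced exactly."""
    if pred_call["name"] != gold_call["name"]:
        return 0.0
    gold_args = gold_call.get("args", {})
    if not gold_args:
        return 1.0
    hits = sum(1 for k, v in gold_args.items()
               if pred_call.get("args", {}).get(k) == v)
    return 0.5 + 0.5 * hits / len(gold_args)


def turn_level_rewards(pred_calls: list, gold_calls: list) -> np.ndarray:
    """Match predicted calls to gold calls via the Hungarian algorithm on
    the similarity matrix; each matched turn is rewarded with its matched
    similarity, unmatched turns receive 0 (an assumption)."""
    rewards = np.zeros(len(pred_calls))
    if not pred_calls or not gold_calls:
        return rewards
    sim = np.array([[call_similarity(p, g) for g in gold_calls]
                    for p in pred_calls])
    rows, cols = linear_sum_assignment(-sim)  # negate to maximize similarity
    for r, c in zip(rows, cols):
        rewards[r] = sim[r, c]
    return rewards


def dual_level_advantages(turn_rewards: np.ndarray,
                          trajectory_advantage: float,
                          alpha: float = 0.5) -> np.ndarray:
    """Blend a normalized turn-level signal with the trajectory-level
    advantage so each interaction turn gets a distinct advantage value."""
    if turn_rewards.std() > 1e-6:
        turn_adv = (turn_rewards - turn_rewards.mean()) / turn_rewards.std()
    else:
        turn_adv = np.zeros_like(turn_rewards)
    return alpha * turn_adv + (1.0 - alpha) * trajectory_advantage


# Usage: three predicted tool calls scored against a two-call gold trace.
pred = [{"name": "search", "args": {"q": "capital of France"}},
        {"name": "search", "args": {"q": "population of Paris"}},
        {"name": "calculator", "args": {"expr": "2+2"}}]
gold = [{"name": "search", "args": {"q": "capital of France"}},
        {"name": "search", "args": {"q": "population of Paris"}}]
r = turn_level_rewards(pred, gold)
print(dual_level_advantages(r, trajectory_advantage=1.0))
```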
Similar Papers
SimpleTIR: End-to-End Reinforcement Learning for Multi-Turn Tool-Integrated Reasoning
Machine Learning (CS)
Trains LLMs end-to-end with reinforcement learning for multi-turn tool-integrated reasoning, improving performance on hard math problems.
CriticSearch: Fine-Grained Credit Assignment for Search Agents via a Retrospective Critic
Computation and Language
Gives search agents fine-grained credit assignment through a retrospective critic, improving question answering.
AutoTIR: Autonomous Tools Integrated Reasoning via Reinforcement Learning
Computation and Language
Trains LLMs with reinforcement learning to autonomously decide when and which tools to invoke.