Score: 3

Process-Supervised Reinforcement Learning for Interactive Multimodal Tool-Use Agents

Published: September 17, 2025 | arXiv ID: 2509.14480v1

By: Weiting Tan, Xinghua Qu, Ming Tu, and more

BigTech Affiliations: ByteDance, Johns Hopkins University

Potential Business Impact:

Teaches AI agents to use software tools through spoken commands, enabling voice-driven interactive assistants.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Effective interactive tool use requires agents to master Tool Integrated Reasoning (TIR): a complex process involving multi-turn planning and long-context dialogue management. To train agents for this dynamic process, particularly in multi-modal contexts, we introduce a sandbox environment for reinforcement learning (RL) that supports interleaved speech-text rollouts. Our core strategy, Turn-level Adjudicated Reinforcement Learning (TARL), addresses the challenge of credit assignment in long-horizon tasks by employing a Large Language Model (LLM) as a judge to provide turn-level evaluation. To enhance exploration, we integrate a mixed-task training curriculum with mathematical reasoning problems. This unified approach boosts the task pass rate on the text-based $\tau$-bench by over 6% compared to strong RL baselines. Crucially, we demonstrate our framework's suitability for fine-tuning a multi-modal foundation model for agentic tasks. By training a base multi-modal LLM on interleaved speech-text rollouts, we equip it with tool-use abilities, paving the way for more natural, voice-driven interactive agents.
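To make the turn-level credit assignment idea concrete, the sketch below illustrates the general pattern in Python: an LLM judge scores each dialogue turn, and those scores are blended with the final task outcome to form per-turn returns for RL. All names (e.g., `llm_judge_score`, `turn_level_returns`), the 0–1 scoring scale, and the blending weight `alpha` are illustrative assumptions, not the paper's exact TARL formulation.

```python
# Sketch of turn-level adjudicated credit assignment (TARL-style).
# Assumption: function names, the judge's 0..1 score scale, the blending
# weight, and the discounting are illustrative, not the paper's method.

from dataclasses import dataclass


@dataclass
class Turn:
    agent_message: str   # agent's reply or tool call for this turn
    tool_result: str     # observation returned by the sandbox/tool


def llm_judge_score(turn: Turn, task_goal: str) -> float:
    """Placeholder for an LLM-as-judge call that rates one turn in [0, 1].

    In practice this would prompt a judge model with the task goal,
    the agent's action for the turn, and the resulting tool output.
    """
    raise NotImplementedError("wire up your judge model here")


def turn_level_returns(turns: list[Turn],
                       task_goal: str,
                       final_success: bool,
                       alpha: float = 0.5,
                       gamma: float = 1.0) -> list[float]:
    """Blend per-turn judge scores with the episode outcome.

    Each turn receives alpha * judge_score plus the (discounted)
    terminal task reward, so useful intermediate turns are credited
    even in long-horizon dialogues.
    """
    terminal = 1.0 if final_success else 0.0
    n = len(turns)
    returns = []
    for t, turn in enumerate(turns):
        judge = llm_judge_score(turn, task_goal)
        # Discount the terminal reward back to this turn.
        returns.append(alpha * judge + (gamma ** (n - 1 - t)) * terminal)
    return returns
```

Blending per-turn judge scores with the terminal reward is one simple way to densify credit over long dialogues; the paper's actual adjudication prompts and training details are in the full text.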

Country of Origin
🇺🇸 🇨🇳 United States, China

Repos / Data Links

Page Count
20 pages

Category
Computer Science:
Computation and Language