Agentic-R1: Distilled Dual-Strategy Reasoning

Published: July 8, 2025 | arXiv ID: 2507.05707v2

By: Weihua Du, Pranjal Aggarwal, Sean Welleck, and more

Potential Business Impact:

Teaches a single AI model to pick the best strategy per problem: running code for arithmetic and algorithmic tasks, and reasoning in text for abstract ones.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Current long chain-of-thought (long-CoT) models excel at mathematical reasoning but rely on slow and error-prone natural language traces. Tool-augmented agents address arithmetic via code execution, but often falter on complex logical tasks. We introduce a fine-tuning framework, DualDistill, that distills complementary reasoning strategies from multiple teachers into a unified student model. Using this approach, we train Agentic-R1, which dynamically selects the optimal strategy for each query, invoking tools for arithmetic and algorithmic problems, and using text-based reasoning for abstract ones. Our method improves accuracy across a range of tasks, including both computation-intensive and standard benchmarks, demonstrating the effectiveness of multi-strategy distillation in achieving robust and efficient reasoning. Our project is available at https://github.com/StigLidu/DualDistill.
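
As a rough illustration of the dual-strategy idea, the sketch below hand-writes the routing that Agentic-R1 learns end-to-end during DualDistill fine-tuning: arithmetic-looking queries go to a code tool, and everything else stays in text-based reasoning. The router heuristic and toy solvers are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of dual-strategy dispatch; NOT the paper's API.
# Agentic-R1 learns this strategy choice during fine-tuning; here the
# routing is hand-written purely to illustrate the concept.
import ast
import operator

# Safe arithmetic evaluator standing in for a sandboxed code tool.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def run_tool(expr: str):
    """Evaluate a pure arithmetic expression, like invoking a code tool."""
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp):
            return _OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def text_reasoning(query: str) -> str:
    """Placeholder for a long chain-of-thought trace on abstract queries."""
    return f"[long-CoT reasoning for: {query!r}]"

def answer(query: str) -> str:
    # Crude stand-in for the learned strategy selector: queries that parse
    # as arithmetic go to the tool; everything else stays in text reasoning.
    try:
        return str(run_tool(query))
    except (SyntaxError, ValueError, KeyError):
        return text_reasoning(query)

if __name__ == "__main__":
    print(answer("3 * (7 + 5) ** 2"))                   # tool path -> 432
    print(answer("Prove that sqrt(2) is irrational."))  # text path
```

In the paper, the two strategies come from different teacher models and are distilled into one student, so the selection is implicit in the learned policy rather than a hand-coded rule like the one above.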

Country of Origin
🇺🇸 United States

Repos / Data Links
https://github.com/StigLidu/DualDistill

Page Count
15 pages

Category
Computer Science: Computation and Language