Agentic-R1: Distilled Dual-Strategy Reasoning
By: Weihua Du, Pranjal Aggarwal, Sean Welleck, and more
Potential Business Impact:
Teaches AI to solve math and logic problems.
Current long chain-of-thought (long-CoT) models excel at mathematical reasoning but rely on slow and error-prone natural language traces. Tool-augmented agents address arithmetic via code execution, but often falter on complex logical tasks. We introduce a fine-tuning framework, DualDistill, that distills complementary reasoning strategies from multiple teachers into a unified student model. Using this approach, we train Agentic-R1, which dynamically selects the optimal strategy for each query, invoking tools for arithmetic and algorithmic problems and using text-based reasoning for abstract ones. Our method improves accuracy across a range of tasks, including both computation-intensive and standard benchmarks, demonstrating the effectiveness of multi-strategy distillation in achieving robust and efficient reasoning. Our project is available at https://github.com/StigLidu/DualDistill.
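To make the per-query strategy selection concrete, here is a minimal, hypothetical sketch of such an inference loop in Python: the student either emits plain text reasoning or a fenced code block, and a harness executes any emitted code and feeds the output back. The names `answer`, `run_python`, the `[tool output]` tag, and the `model.generate()` call are illustrative assumptions, not the actual Agentic-R1/DualDistill implementation.

```python
import re
import subprocess
import sys

# Hypothetical harness for a dual-strategy student model: if the model
# emits a ```python ...``` block, treat it as tool use and execute it;
# otherwise treat the completion as pure text reasoning.

CODE_BLOCK = re.compile(r"```python\n(.*?)```", re.DOTALL)


def run_python(code: str, timeout: float = 5.0) -> str:
    """Execute a model-emitted snippet in a subprocess and capture stdout."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout if result.returncode == 0 else result.stderr


def answer(model, prompt: str, max_rounds: int = 4) -> str:
    """Alternate between generation and code execution until the model
    stops emitting code blocks, then return its final completion."""
    transcript = prompt
    for _ in range(max_rounds):
        completion = model.generate(transcript)  # placeholder generate() API
        transcript += completion
        match = CODE_BLOCK.search(completion)
        if match is None:          # text-reasoning path: no tool call needed
            return completion
        # Tool-augmented path: run the code and append the observation.
        observation = run_python(match.group(1))
        transcript += f"\n[tool output]\n{observation}\n"
    return transcript
```

Under these assumptions, the routing decision lives entirely in the model's output format: arithmetic or algorithmic queries tend to produce executable code, while abstract ones stay in natural language, which is the behavior the distilled student is trained to exhibit.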
Similar Papers
Reducing Cognitive Load in Multi-Agent Reinforcement Learning for Mathematical Problem Solving: Decoupling Reasoning and Code Generation
Artificial Intelligence
Splits math problems between two AI helpers.
Marco-o1 v2: Towards Widening The Distillation Bottleneck for Reasoning Models
Machine Learning (CS)
Teaches small computers to think better, not overthink.
Deconstructing Long Chain-of-Thought: A Structured Reasoning Optimization Framework for Long CoT Distillation
Artificial Intelligence
Teaches computers to think better, step-by-step.