Reasoning Path Divergence: A New Metric and Curation Strategy to Unlock LLM Diverse Thinking
By: Feng Ju, Zeyu Qin, Rui Min, and more
Potential Business Impact:
Teaches computers many ways to solve one problem.
While Test-Time Scaling (TTS) has proven effective in improving the reasoning ability of large language models (LLMs), low diversity in model outputs often becomes a bottleneck; this is partly caused by the common "one problem, one solution" (1P1S) training practice, which provides a single canonical answer and can push models toward a narrow set of reasoning paths. To address this, we propose a "one problem, multiple solutions" (1PNS) training paradigm that exposes the model to a variety of valid reasoning trajectories and thus increases inference diversity. A core challenge for 1PNS is reliably measuring semantic differences between multi-step chains of thought, so we introduce Reasoning Path Divergence (RPD), a step-level metric that aligns and scores Long Chain-of-Thought solutions to capture differences in intermediate reasoning. Using RPD, we curate maximally diverse solution sets per problem and fine-tune Qwen3-4B-Base. Experiments show that RPD-selected training yields more varied outputs and higher pass@k, with an average +2.80% gain in pass@16 over a strong 1P1S baseline and a +4.99% gain on AIME24, demonstrating that 1PNS further amplifies the effectiveness of TTS. Our code is available at https://github.com/fengjujf/Reasoning-Path-Divergence.
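As a rough illustration of the idea, the sketch below splits each long chain-of-thought into steps, embeds them, aligns steps across two solutions by best-match cosine similarity, scores divergence as one minus the mean matched similarity, and then greedily picks a mutually divergent subset of solutions per problem. The step splitting, the toy embedder, the alignment rule, and the max-min selection heuristic are all assumptions made for illustration, not the paper's actual RPD implementation; refer to the repository linked above for the real method.

import numpy as np

def split_steps(solution: str) -> list[str]:
    # Assumption: reasoning steps in the chain-of-thought are separated by blank lines.
    return [s.strip() for s in solution.split("\n\n") if s.strip()]

def toy_embed(text: str, dim: int = 256) -> np.ndarray:
    # Placeholder embedding (hashed bag-of-words, L2-normalized);
    # a real sentence encoder would be swapped in here.
    vec = np.zeros(dim)
    for tok in text.lower().split():
        vec[hash(tok) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def rpd(sol_a: str, sol_b: str, embed=toy_embed) -> float:
    # Align each step of A with its most similar step of B (and vice versa),
    # then report 1 - mean matched cosine similarity as the divergence score.
    A = np.stack([embed(s) for s in split_steps(sol_a)])
    B = np.stack([embed(s) for s in split_steps(sol_b)])
    sim = A @ B.T                                   # cosine similarities (rows are unit vectors)
    matched = np.concatenate([sim.max(axis=1), sim.max(axis=0)])
    return float(1.0 - matched.mean())

def select_diverse(solutions: list[str], k: int) -> list[int]:
    # Greedy max-min curation: start from the first solution, then repeatedly add
    # the candidate whose minimum divergence to the selected set is largest.
    chosen = [0]
    while len(chosen) < min(k, len(solutions)):
        best, best_score = None, -1.0
        for i in range(len(solutions)):
            if i in chosen:
                continue
            score = min(rpd(solutions[i], solutions[j]) for j in chosen)
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen

# Example usage: keep the 4 most mutually divergent solutions for one problem.
# picked = select_diverse(candidate_solutions, k=4)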
Similar Papers
Diversity-Aware Policy Optimization for Large Language Model Reasoning
Machine Learning (CS)
Makes AI better at solving math problems.
Enhancing Long Chain-of-Thought Reasoning through Multi-Path Plan Aggregation
Computation and Language
Helps AI think better by checking its plans.
Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models
Computer Vision and Pattern Recognition
Teaches computers to solve math problems better.