DeepPrune: Parallel Scaling without Inter-trace Redundancy
By: Shangqing Tu, Yaxuan Li, Yushi Bai, and more
Potential Business Impact:
Makes smart computer thinking faster and cheaper.
Parallel scaling has emerged as a powerful paradigm for enhancing reasoning in large language models (LLMs) by generating multiple Chain-of-Thought (CoT) traces simultaneously. However, this approach introduces significant computational inefficiency due to inter-trace redundancy: our analysis reveals that over 80% of parallel reasoning traces yield identical final answers, representing substantial wasted computation. To address this efficiency bottleneck, we propose DeepPrune, a novel framework that enables efficient parallel scaling through dynamic pruning. Our method features a specialized judge model, trained with focal loss and oversampling, that predicts answer equivalence from partial reasoning traces (achieving 0.87 AUROC), combined with an online greedy clustering algorithm that dynamically prunes redundant paths while preserving answer diversity. Comprehensive evaluations across three challenging benchmarks (AIME 2024, AIME 2025, and GPQA) and multiple reasoning models show that DeepPrune reduces tokens by over 80% compared with conventional consensus sampling in most cases, while keeping accuracy within 3 percentage points. Our work establishes a new standard for efficient parallel reasoning, making high-performance inference substantially cheaper. Code and data: https://deepprune.github.io/
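For intuition, here is a minimal Python sketch of the two ingredients the abstract names: a focal-loss objective for training an equivalence judge, and an online greedy clustering pass that prunes partial traces the judge flags as redundant. The `JudgeFn` callable, the `greedy_prune` helper, the `threshold`, and the `gamma`/`alpha` hyperparameters are illustrative assumptions, not the paper's actual implementation or values.

```python
import math
from typing import Callable, List

# Hypothetical judge: returns the probability that two partial reasoning
# traces will converge to the same final answer. In DeepPrune this is a
# trained model; here it is an injected callable (an assumption).
JudgeFn = Callable[[str, str], float]

def focal_loss(p: float, y: int, gamma: float = 2.0, alpha: float = 0.25) -> float:
    """Focal loss for one (prediction, label) pair, in the standard form
    (Lin et al., 2017): -alpha_t * (1 - p_t)^gamma * log(p_t).
    Down-weights easy pairs so training focuses on hard equivalence
    decisions; gamma and alpha here are common defaults, not the paper's."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(max(p_t, 1e-12))

def greedy_prune(partial_traces: List[str], judge: JudgeFn,
                 threshold: float = 0.9) -> List[int]:
    """Online greedy clustering: maintain one representative per answer
    cluster. Each incoming partial trace is compared against existing
    representatives; if the judge predicts equivalence above `threshold`
    with any of them, the trace is pruned, otherwise it seeds a new
    cluster. Returns indices of traces to keep generating."""
    representatives: List[int] = []
    for i, trace in enumerate(partial_traces):
        if all(judge(partial_traces[r], trace) < threshold
               for r in representatives):
            representatives.append(i)  # novel answer direction: keep it
        # else: likely redundant with an existing cluster, so prune it
    return representatives
```

Under this reading, token savings come from stopping generation early on every trace the judge clusters with an existing representative, while answer diversity is preserved because each predicted-distinct answer keeps one live trace.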
Similar Papers
Think, Prune, Train, Improve: Scaling Reasoning without Scaling Models
Machine Learning (CS)
Computers learn to solve harder math problems.
Concise Reasoning, Big Gains: Pruning Long Reasoning Trace with Difficulty-Aware Prompting
Artificial Intelligence
Makes AI think faster and cheaper.
Learning Adaptive Parallel Reasoning with Language Models
Artificial Intelligence
Lets computers think smarter, faster, and more accurately.