Structured Pruning for Diverse Best-of-N Reasoning Optimization

Published: June 4, 2025 | arXiv ID: 2506.03978v2

By: Hieu Trung Nguyen, Bao Nguyen, Viet Anh Nguyen

Potential Business Impact:

Improves language-model accuracy on math reasoning benchmarks (MATH500, GSM8K) by selectively pruning attention heads at inference, with no additional training of the base model.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Model pruning in transformer-based language models, traditionally viewed as a means of achieving computational savings, can enhance the model's reasoning capabilities. In this work, we uncover a surprising phenomenon: the selective pruning of certain attention heads leads to improvements in reasoning performance, particularly on challenging tasks. Motivated by this observation, we propose SPRINT, a novel contrastive learning framework that dynamically selects the optimal head and layer to prune during inference. By aligning question embeddings with head embeddings, SPRINT identifies those pruned-head configurations that result in more accurate reasoning. Extensive experiments demonstrate that our method significantly outperforms traditional best-of-$N$ and random head selection strategies on the MATH500 and GSM8K datasets.
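The core selection step described in the abstract, matching a question embedding against per-head embeddings to choose which attention head to prune, can be sketched as follows. This is a minimal illustration under stated assumptions: the embedding shapes, the cosine-similarity scoring, and names such as `select_head_to_prune` are hypothetical and not taken from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned embeddings: one vector per (layer, head)
# pruning configuration, plus an embedding of the input question.
n_layers, n_heads, dim = 4, 8, 16
head_emb = rng.normal(size=(n_layers, n_heads, dim))
question_emb = rng.normal(size=dim)

def select_head_to_prune(question_emb, head_emb):
    """Return the (layer, head) whose embedding best aligns with the question.

    Mirrors the abstract's idea of aligning question embeddings with head
    embeddings; cosine similarity is an assumed scoring choice here.
    """
    q = question_emb / np.linalg.norm(question_emb)
    h = head_emb / np.linalg.norm(head_emb, axis=-1, keepdims=True)
    scores = h @ q  # cosine similarity for each (layer, head) pair
    layer, head = np.unravel_index(np.argmax(scores), scores.shape)
    return int(layer), int(head)

layer, head = select_head_to_prune(question_emb, head_emb)
print(layer, head)  # indices of the single head to mask at inference
```

In a real pipeline, the selected head would then be masked (e.g. its attention output zeroed) before generating the answer, replacing the random or exhaustive best-of-$N$ head search the paper compares against.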

Page Count
12 pages

Category
Computer Science:
Computation and Language