AIR: Post-training Data Selection for Reasoning via Attention Head Influence

Published: December 15, 2025 | arXiv ID: 2512.13279v1

By: Jinrui Liu, Jeff Wu, Xuanguang Pan, and more

Potential Business Impact:

Improves AI reasoning by automatically picking the training steps and examples that matter most.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

LLMs achieve remarkable multi-step reasoning capabilities, yet effectively transferring these skills via post-training distillation remains challenging. Existing data selection methods, ranging from manual curation to heuristics based on length, entropy, or overall loss, fail to capture the causal importance of individual reasoning steps, limiting distillation efficiency. To address this, we propose Attention Influence for Reasoning (AIR), a principled, unsupervised, and training-free framework that leverages mechanistic insights into retrieval heads to select high-value post-training data. AIR first identifies reasoning-critical attention heads in an off-the-shelf model, then constructs a weakened reference model in which those heads' influence is disabled, and finally quantifies the resulting loss divergence as the Attention Influence Score. This score enables fine-grained assessment at both the step and sample levels, supporting step-level weighted fine-tuning and global sample selection. Experiments across multiple reasoning benchmarks show that AIR consistently improves reasoning accuracy, surpassing heuristic baselines and effectively isolating the most critical steps and samples. Our work establishes a mechanism-driven, data-efficient approach for reasoning distillation in LLMs.
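
The abstract describes a three-step scoring pipeline: disable reasoning-critical heads, re-score the data under the weakened model, and treat the loss divergence as the Attention Influence Score. The sketch below illustrates that pipeline under stated assumptions; it is not the paper's implementation. It assumes a HuggingFace Transformers causal LM with a Llama-style module layout (model.model.layers[i].self_attn.o_proj) and takes the set of reasoning-critical heads (critical_heads) as given, since the head-identification step is not detailed in the abstract. The function names and the specific ablation (zeroing a head's slice of the attention output before the output projection) are illustrative choices.

```python
# Minimal, illustrative sketch of an Attention Influence Score (AIS):
# per-token loss under a "weakened" model (selected heads disabled) minus
# per-token loss under the original model. Module paths and head_dim
# arithmetic assume a Llama-style HuggingFace causal LM; the paper's
# actual weakening procedure may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def per_token_loss(model, input_ids):
    """Per-token next-token cross-entropy under a causal LM."""
    with torch.no_grad():
        logits = model(input_ids).logits
    shift_logits = logits[:, :-1, :]   # token t predicted from tokens < t
    shift_labels = input_ids[:, 1:]
    loss = torch.nn.functional.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        reduction="none",
    )
    return loss.view(shift_labels.shape)  # (batch, seq_len - 1)


def disable_heads(model, heads):
    """Zero each head's slice of the attention output just before the
    output projection -- one simple way to 'disable head influence'."""
    head_dim = model.config.hidden_size // model.config.num_attention_heads
    by_layer = {}
    for layer_idx, head_idx in heads:
        by_layer.setdefault(layer_idx, []).append(head_idx)
    hooks = []
    for layer_idx, head_idxs in by_layer.items():
        o_proj = model.model.layers[layer_idx].self_attn.o_proj

        def pre_hook(module, args, idxs=tuple(head_idxs), d=head_dim):
            (hidden,) = args            # o_proj input: (batch, seq, n_heads * d)
            hidden = hidden.clone()
            for h in idxs:
                hidden[..., h * d:(h + 1) * d] = 0.0
            return (hidden,)

        hooks.append(o_proj.register_forward_pre_hook(pre_hook))
    return hooks


def attention_influence_score(model, tokenizer, text, critical_heads):
    """AIS for one sample: mean loss divergence between the weakened and
    original model. A step-level variant would slice the per-token loss
    by the token spans of each reasoning step."""
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    base = per_token_loss(model, input_ids)
    hooks = disable_heads(model, critical_heads)
    try:
        weakened = per_token_loss(model, input_ids)
    finally:
        for h in hooks:
            h.remove()
    return (weakened - base).mean().item()
```

In this reading, a higher score marks steps or samples whose loss rises most when the reasoning-critical heads are removed, which is the signal the abstract uses for step-level weighted fine-tuning and global sample selection.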

Repos / Data Links

Page Count
19 pages

Category
Computer Science:
Computation and Language