AIR: Post-training Data Selection for Reasoning via Attention Head Influence
By: Jinrui Liu, Jeff Wu, Xuanguang Pan, and more
Potential Business Impact:
Teaches AI to think better by picking important steps.
LLMs achieve remarkable multi-step reasoning capabilities, yet effectively transferring these skills via post-training distillation remains challenging. Existing data selection methods, ranging from manual curation to heuristics based on length, entropy, or overall loss, fail to capture the causal importance of individual reasoning steps, limiting distillation efficiency. To address this, we propose Attention Influence for Reasoning (AIR), a principled, unsupervised, and training-free framework that leverages mechanistic insights into retrieval heads to select high-value post-training data. AIR first identifies the reasoning-critical attention heads of an off-the-shelf model, then constructs a weakened reference model by disabling those heads, and finally quantifies the resulting loss divergence as the Attention Influence Score. This score enables fine-grained assessment at both the step and sample levels, supporting step-level weighted fine-tuning and global sample selection. Experiments across multiple reasoning benchmarks show that AIR consistently improves reasoning accuracy, surpassing heuristic baselines and effectively isolating the most critical steps and samples. Our work establishes a mechanism-driven, data-efficient approach to reasoning distillation in LLMs.
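To make the scoring pipeline concrete, here is a minimal sketch in PyTorch/transformers, assuming a GPT-2-style model in which per-head attention outputs are concatenated before each layer's output projection. The model name, the `critical_heads` list, and all helper names are illustrative placeholders under that assumption, not the paper's actual implementation.

```python
# Sketch of an Attention Influence Score: per-token loss of a weakened
# reference model (critical heads zeroed) minus that of the base model.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; AIR uses an off-the-shelf reasoning model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

# (layer, head) pairs previously identified as reasoning-critical,
# e.g. via retrieval-head detection; hypothetical values here.
critical_heads = [(3, 0), (7, 5)]

def make_head_zeroing_hook(heads, num_heads):
    # Pre-hook on the attention output projection (c_proj): its input is
    # the concatenation of per-head contexts, so zeroing a head's slice
    # removes that head's contribution to the residual stream.
    def hook(module, args):
        (x,) = args
        head_dim = x.shape[-1] // num_heads
        x = x.clone()
        for h in heads:
            x[..., h * head_dim:(h + 1) * head_dim] = 0.0
        return (x,)
    return hook

def per_token_loss(model, input_ids):
    # Next-token cross-entropy at every position, without reduction.
    with torch.no_grad():
        logits = model(input_ids).logits
    return F.cross_entropy(
        logits[:, :-1].transpose(1, 2), input_ids[:, 1:], reduction="none"
    )[0]

def attention_influence_scores(text):
    input_ids = tok(text, return_tensors="pt").input_ids
    base_loss = per_token_loss(model, input_ids)

    # Weakened reference model: same weights, critical heads disabled.
    by_layer = {}
    for layer, head in critical_heads:
        by_layer.setdefault(layer, []).append(head)
    handles = []
    for layer, heads in by_layer.items():
        attn = model.transformer.h[layer].attn  # GPT-2 layout; model-specific
        handles.append(attn.c_proj.register_forward_pre_hook(
            make_head_zeroing_hook(heads, model.config.n_head)))
    weak_loss = per_token_loss(model, input_ids)
    for h in handles:
        h.remove()

    # Loss divergence per token; larger values mark tokens whose
    # prediction depended more on the disabled heads.
    return weak_loss - base_loss
```

Aggregating these per-token scores over the tokens of each reasoning step would give the step-level scores used for weighted fine-tuning, and averaging over a whole sample would give the sample-level score used for global selection.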
Similar Papers
AttentionInfluence: Adopting Attention Head Influence for Weak-to-Strong Pretraining Data Selection
Computation and Language
Helps computers learn to think better.
From Reasoning LLMs to BERT: A Two-Stage Distillation Framework for Search Relevance
Information Retrieval
Makes online shopping search faster and smarter.
Influence Functions for Efficient Data Selection in Reasoning
Machine Learning (CS)
Teaches computers to think better with fewer examples.