CircuitSeer: Mining High-Quality Data by Probing Mathematical Reasoning Circuits in LLMs
By: Shaobo Wang, Yongliang Miao, Yuancheng Liu, and more
Potential Business Impact:
Finds smart ways to teach computers faster.
Large language models (LLMs) have demonstrated impressive reasoning capabilities, but scaling their performance often relies on massive reasoning datasets that are computationally expensive to train on. Existing data selection methods aim to curate smaller, high-quality subsets but often rely on costly external models or opaque heuristics. In this work, we shift the focus from external heuristics to the model's internal mechanisms. We find that complex reasoning tasks consistently activate a sparse, specialized subset of attention heads, which form core reasoning circuits. Building on this insight, we propose CircuitSeer, a novel data selection method that quantifies the reasoning complexity of data by measuring its influence on these crucial circuits. Extensive experiments on 4 models and 9 datasets demonstrate CircuitSeer's superiority. Notably, fine-tuning Qwen2.5-Math-7B on just 10% of the data selected by our method achieves a 1.4-point gain in average Pass@1 over training on the full dataset, highlighting its efficiency and effectiveness.
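To make the idea concrete, below is a minimal sketch of what circuit-probing data selection could look like, assuming a HuggingFace-style causal LM that can return per-layer attention maps. The `REASONING_HEADS` list, the entropy-based scoring rule, and the function names are illustrative assumptions, not the paper's exact formulation; the paper's actual influence measure may differ.

```python
# Illustrative sketch of circuit-probing data selection (not the paper's
# exact method). Assumes a set of (layer, head) pairs has already been
# identified as the "reasoning circuit", e.g. by ablation on math probes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2.5-Math-7B"  # model used in the paper's experiments
# Hypothetical circuit: (layer, head) pairs; placeholder values only.
REASONING_HEADS = [(12, 3), (15, 7), (20, 1)]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    torch_dtype=torch.bfloat16,
    attn_implementation="eager",  # needed so attention maps are materialized
)
model.eval()

@torch.no_grad()
def circuit_score(text: str) -> float:
    """Score one training example by how strongly it drives the
    pre-identified reasoning heads (a simple proxy for 'influence')."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    out = model(**inputs, output_attentions=True)
    # out.attentions: tuple (one per layer) of [batch, heads, seq, seq]
    score = 0.0
    for layer, head in REASONING_HEADS:
        attn = out.attentions[layer][0, head]  # [seq, seq], rows sum to 1
        probs = attn.clamp_min(1e-9)
        # Mean per-token attention entropy as a crude activity summary.
        entropy = -(probs * probs.log()).sum(dim=-1).mean()
        score += float(entropy)
    return score / len(REASONING_HEADS)

def select_top_fraction(pool: list[str], frac: float = 0.10) -> list[str]:
    """Rank a candidate pool by circuit score and keep the top fraction,
    mirroring the 10% selection budget described in the abstract."""
    ranked = sorted(pool, key=circuit_score, reverse=True)
    return ranked[: max(1, int(len(ranked) * frac))]
```

The key design point this sketch tries to capture is that scoring uses only the model's own internals (attention maps on a fixed head set), so no external judge model or handcrafted heuristic is needed to rank the data.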
Similar Papers
Evaluating Mathematical Reasoning Across Large Language Models: A Fine-Grained Approach
Machine Learning (CS)
Makes AI better at solving math problems.
ChipMind: Retrieval-Augmented Reasoning for Long-Context Circuit Design Specifications
Artificial Intelligence
Helps computers design computer chips faster.
Constructive Circuit Amplification: Improving Math Reasoning in LLMs via Targeted Sub-Network Updates
Computation and Language
Makes AI better at math without changing it much.