Influence Functions for Efficient Data Selection in Reasoning

Published: October 7, 2025 | arXiv ID: 2510.06108v1

By: Prateek Humane, Paolo Cudrano, Daniel Z. Kaplan, and more

Potential Business Impact:

Helps language models learn to reason well from fewer, carefully chosen training examples.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Fine-tuning large language models (LLMs) on chain-of-thought (CoT) data shows that a small amount of high-quality data can outperform massive datasets. Yet, what constitutes "quality" remains ill-defined. Existing reasoning methods rely on indirect heuristics such as problem difficulty or trace length; work on instruction tuning has explored a broader range of automated selection strategies, but rarely in the context of reasoning. We propose to define reasoning data quality using influence functions, which measure the causal effect of individual CoT examples on downstream accuracy, and introduce influence-based pruning, which consistently outperforms perplexity and embedding-based baselines on math reasoning within a model family.
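The abstract's core recipe, scoring each CoT training example by its effect on downstream accuracy and pruning the low scorers, can be sketched with a first-order (TracIn-style) influence approximation: the influence of a training example is taken as the dot product between its loss gradient and the gradient of a held-out validation loss. The sketch below is illustrative only, not the paper's exact estimator; `model`, `loss_fn`, `train_examples`, and `val_batch` are hypothetical placeholders.

```python
import torch

def influence_scores(model, loss_fn, train_examples, val_batch):
    """First-order influence approximation (TracIn-style):
    score(z) = grad L(z) . grad L(val).
    Illustrative sketch; the paper's estimator may differ."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient of the validation loss, used as a proxy for
    # downstream (e.g., math-reasoning) accuracy.
    val_loss = loss_fn(model, val_batch)
    val_grads = torch.autograd.grad(val_loss, params)

    scores = []
    for z in train_examples:  # one CoT example at a time
        grads = torch.autograd.grad(loss_fn(model, z), params)
        # Inner product between the training and validation gradients.
        score = sum((g * v).sum() for g, v in zip(grads, val_grads))
        scores.append(score.item())
    return scores

def prune_by_influence(dataset, scores, keep_frac=0.3):
    """Keep only the top-`keep_frac` most influential CoT examples."""
    k = max(1, int(len(dataset) * keep_frac))
    top = sorted(range(len(dataset)), key=lambda i: scores[i], reverse=True)[:k]
    return [dataset[i] for i in top]
```

Under these assumptions, the pruned subset returned by `prune_by_influence` would serve as the fine-tuning corpus, with validation gradients drawn from a held-out reasoning set.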

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)