Influence Functions for Efficient Data Selection in Reasoning
By: Prateek Humane, Paolo Cudrano, Daniel Z. Kaplan, and more
Potential Business Impact:
Teaches computers to think better with fewer examples.
Fine-tuning large language models (LLMs) on chain-of-thought (CoT) data shows that a small amount of high-quality data can outperform massive datasets, yet what constitutes "quality" remains ill-defined. Existing reasoning methods rely on indirect heuristics such as problem difficulty or trace length; instruction tuning has explored a broader range of automated selection strategies, but rarely in the context of reasoning. We propose to define reasoning data quality using influence functions, which measure the causal effect of individual CoT examples on downstream accuracy, and introduce influence-based pruning, which consistently outperforms perplexity- and embedding-based baselines on math reasoning within a model family.
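To make the idea of influence-based pruning concrete, here is a minimal sketch assuming a first-order (TracIn-style) influence approximation: each training example is scored by the dot product of its loss gradient with the gradient of a held-out validation loss, and the lowest-scoring examples are pruned. The abstract does not specify the paper's estimator, so the tiny linear model, synthetic data, and the `prune_fraction` value below are illustrative placeholders, not the authors' actual setup.

```python
# Sketch of influence-based data pruning under a first-order approximation:
# influence(z) ~ grad L(z) . grad L_val (a stand-in for downstream accuracy).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model and synthetic data standing in for an LLM and CoT examples.
model = nn.Linear(16, 1)
loss_fn = nn.MSELoss()

train_x, train_y = torch.randn(100, 16), torch.randn(100, 1)
val_x, val_y = torch.randn(20, 16), torch.randn(20, 1)

def flat_grad(loss):
    """Return the model's gradient for `loss` as one flat vector."""
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

# Gradient of the loss on a held-out validation set.
val_grad = flat_grad(loss_fn(model(val_x), val_y))

# Score each training example: higher dot product = more helpful
# under this first-order approximation.
scores = []
for x, y in zip(train_x, train_y):
    g = flat_grad(loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)))
    scores.append(torch.dot(g, val_grad).item())
scores = torch.tensor(scores)

# Influence-based pruning: keep the top (1 - prune_fraction) of examples.
prune_fraction = 0.5  # illustrative choice, not from the paper
keep = scores.argsort(descending=True)[: int(len(scores) * (1 - prune_fraction))]
pruned_train_x, pruned_train_y = train_x[keep], train_y[keep]

print(f"Kept {len(keep)} of {len(train_x)} examples")
```

The ranking-then-pruning step is the part that corresponds to the paper's proposal; the gradient-dot-product scorer is just one common way to approximate influence, and the actual method may use a different influence estimator.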
Similar Papers
When Thinking Fails: The Pitfalls of Reasoning for Instruction-Following in LLMs
Computation and Language
Makes AI follow instructions better by fixing reasoning.
Expanding Reasoning Potential in Foundation Model by Learning Diverse Chains of Thought Patterns
Artificial Intelligence
Teaches computers to solve math problems better.
Scaling Reasoning can Improve Factuality in Large Language Models
Computation and Language
Makes computers answer questions more accurately.