Score: 2

Efficient Data Selection at Scale via Influence Distillation

Published: May 25, 2025 | arXiv ID: 2505.19051v1

By: Mahdi Nikdan, Vincent Cohen-Addad, Dan Alistarh, and more

BigTech Affiliations: Google

Potential Business Impact:

Speeds up LLM fine-tuning by selecting the most useful training data, cutting data-selection time by up to 3.5x while matching or beating state-of-the-art quality.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Effective data selection is critical for efficient training of modern Large Language Models (LLMs). This paper introduces Influence Distillation, a novel, mathematically justified framework for data selection that employs second-order information to optimally weight training samples. By distilling each sample's influence on a target distribution, the method assigns model-specific weights that are used to select training data for LLM fine-tuning, guiding it toward strong performance on the target domain. The authors derive these optimal weights for both Gradient Descent and Adam optimizers. To ensure scalability and reduce computational cost, they propose a landmark-based approximation: influence is precisely computed for a small subset of "landmark" samples and then efficiently propagated to all other samples to determine their weights. Influence Distillation is validated on instruction tuning with the Tulu V2 dataset, targeting a range of tasks including GSM8k, SQuAD, and MMLU, across several models from the Llama and Qwen families. Experiments show that Influence Distillation matches or outperforms state-of-the-art performance while achieving up to 3.5x faster selection.
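To make the landmark idea concrete, below is a minimal sketch of how landmark-based influence propagation could look in practice. It assumes influence is propagated from landmarks to the remaining samples via similarity in some cheap per-sample embedding space (e.g., low-dimensional gradient features); the function name, the softmax-weighted propagation rule, and the cosine-similarity choice are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def select_data_by_influence(
    sample_embeddings: np.ndarray,   # (N, d) cheap per-sample features, e.g. gradient sketches (assumption)
    landmark_idx: np.ndarray,        # indices of the small "landmark" subset
    landmark_influence: np.ndarray,  # (L,) exactly computed influence of each landmark on the target set
    k: int,                          # number of training samples to keep
) -> np.ndarray:
    """Propagate landmark influence to all samples and return indices of the top-k.

    Sketch only: the softmax-over-cosine-similarity propagation below is an
    assumed stand-in for the paper's propagation scheme.
    """
    landmarks = sample_embeddings[landmark_idx]                                   # (L, d)
    # Cosine similarity between every sample and every landmark.
    norm_s = sample_embeddings / np.linalg.norm(sample_embeddings, axis=1, keepdims=True)
    norm_l = landmarks / np.linalg.norm(landmarks, axis=1, keepdims=True)
    sim = norm_s @ norm_l.T                                                       # (N, L)
    # Turn similarities into propagation weights and mix the landmark influences.
    weights = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)
    estimated_influence = weights @ landmark_influence                            # (N,)
    # Keep the k samples with the highest estimated influence on the target domain.
    return np.argsort(-estimated_influence)[:k]

# Toy usage with random stand-in data (all values hypothetical).
rng = np.random.default_rng(0)
emb = rng.normal(size=(10_000, 64))
landmark_idx = rng.choice(10_000, size=256, replace=False)
landmark_infl = rng.normal(size=256)
chosen = select_data_by_influence(emb, landmark_idx, landmark_infl, k=1_000)
```

The point of this structure is the cost profile: exact (second-order) influence is only computed for the 256 landmarks, while the remaining samples are scored with a single matrix product, which is what makes the selection scalable.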

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
27 pages

Category
Computer Science:
Computation and Language