Efficient Data Selection at Scale via Influence Distillation
By: Mahdi Nikdan, Vincent Cohen-Addad, Dan Alistarh, and others
Potential Business Impact:
Lowers the cost of fine-tuning AI models by automatically picking the training examples most useful for a target task.
Effective data selection is critical for efficient training of modern Large Language Models (LLMs). This paper introduces Influence Distillation, a novel, mathematically justified framework for data selection that employs second-order information to optimally weight training samples. By distilling each sample's influence on a target distribution, the method assigns model-specific weights that are used to select training data for LLM fine-tuning, guiding it toward strong performance on the target domain. These optimal weights are derived for both Gradient Descent and Adam optimizers. To ensure scalability and reduce computational cost, the authors propose a $\textit{landmark-based approximation}$: influence is precisely computed for a small subset of "landmark" samples and then efficiently propagated to all other samples to determine their weights. Influence Distillation is validated on instruction tuning with the Tulu V2 dataset, targeting a range of tasks including GSM8k, SQuAD, and MMLU, across several models from the Llama and Qwen families. Experiments show that Influence Distillation matches or outperforms state-of-the-art performance while achieving up to $3.5\times$ faster selection.
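To make the landmark idea concrete, below is a minimal sketch in NumPy. It is not the paper's implementation: it uses a simple first-order influence score (gradient alignment with a target gradient) in place of the paper's second-order, optimizer-aware weights, and the function name, the embedding-similarity propagation rule, and the temperature parameter are all assumptions made for illustration. The point it conveys is that exact influence is computed only for a small landmark subset and then propagated to every other candidate sample through a cheap similarity measure.

```python
import numpy as np

def landmark_influence_weights(sample_emb, landmark_idx, landmark_grads,
                               target_grad, temperature=1.0):
    """Illustrative landmark-based influence propagation (hypothetical helper).

    sample_emb     -- (N, d) cheap embeddings for all N candidate samples
    landmark_idx   -- indices of the L landmark samples within the N candidates
    landmark_grads -- (L, p) per-sample gradients computed only for landmarks
    target_grad    -- (p,) gradient of the loss on the target distribution
    """
    # First-order influence of each landmark: how well its gradient aligns
    # with the target gradient (the paper's second-order terms are omitted here).
    landmark_influence = landmark_grads @ target_grad        # shape (L,)

    # Similarity of every candidate to every landmark in embedding space,
    # turned into a row-wise softmax so each candidate mixes landmark scores.
    sims = (sample_emb @ sample_emb[landmark_idx].T) / temperature  # (N, L)
    sims -= sims.max(axis=1, keepdims=True)                   # numerical stability
    probs = np.exp(sims)
    probs /= probs.sum(axis=1, keepdims=True)

    # Propagate landmark influence to all candidates; larger weight = more useful.
    return probs @ landmark_influence                          # shape (N,)

# Toy usage with random stand-in data.
rng = np.random.default_rng(0)
N, L, d, p = 1000, 32, 64, 128
emb = rng.normal(size=(N, d))
landmarks = rng.choice(N, size=L, replace=False)
landmark_grads = rng.normal(size=(L, p))
target_grad = rng.normal(size=p)

weights = landmark_influence_weights(emb, landmarks, landmark_grads, target_grad)
selected = np.argsort(weights)[-100:]   # keep the 100 highest-weight samples
```

The design intuition is that per-sample gradients are expensive, so they are only computed for the landmarks, while the propagation step is cheap; this split is what enables the reported up-to-$3.5\times$ faster selection. The scoring and propagation details above are simplified placeholders for the paper's actual derivation.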
Similar Papers
Improving Influence-based Instruction Tuning Data Selection for Balanced Learning of Diverse Capabilities
Computation and Language
Balances influence-based data selection so instruction-tuned models improve across many different capabilities at once.
Not All Instances Are Equally Valuable: Towards Influence-Weighted Dataset Distillation
Machine Learning (CS)
Improves dataset distillation by weighting training instances according to how much they influence learning, since not all examples are equally valuable.
Transferable text data distillation by trajectory matching
Computation and Language
Distills text data via trajectory matching to produce compact, transferable training sets for language models.