Data Mixing Optimization for Supervised Fine-Tuning of Large Language Models
By: Yuan Li, Zhengzhong Liu, Eric Xing
Potential Business Impact:
Makes AI learn better from mixed information.
Optimizing data mixtures for supervised fine-tuning (SFT) of large language models (LLMs) is critical for developing general-purpose models, yet this area remains underexplored. In this paper, we frame data mixing as an optimization problem and introduce a novel method designed to minimize validation loss. Our approach parametrizes the loss by modeling the effective data transferred between domains and leveraging scaling laws for fine-tuning. By experimenting with various small-scale data mixtures, we fit these parameters and derive the optimal weights. We provide both mathematical proofs and empirical results demonstrating that our algorithm performs well both overall and on individual domains. Through controlled experiments, we show that models trained with our optimized weights perform on par with those using optimal weights determined via grid search, with per-domain loss on average only 0.66% higher than the best domain loss from grid search. Additionally, we show that reweighting popular SFT datasets using our method improves both validation loss and downstream performance. Finally, we discuss how our method can generalize to guide data selection for domain-specific models and provide insights into SFT.
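The abstract compresses the pipeline into a few sentences; the sketch below makes the three steps concrete: (1) record per-domain validation losses from several small-scale mixture runs, (2) fit a per-domain scaling law with an effective-data term, (3) optimize the mixture weights against the fitted model. The power-law form L_i = E_i + A_i * D_eff^(-beta_i), the transfer vector t, and the synthetic "observations" are illustrative assumptions for this sketch, not the paper's actual parametrization.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
K, N = 3, 100_000  # number of domains, total fine-tuning budget (examples)

def predicted_loss(w, params, i):
    """Assumed scaling-law loss for domain i: E + A / D_eff^beta, where
    D_eff = N * (t . w) is the effective data reaching domain i under
    mixture weights w and transfer vector t."""
    E, A, beta, t = params[i]
    d_eff = N * max(float(t @ w), 1e-9)
    return E + A * d_eff ** (-beta)

# Step 1: gather (mixture, per-domain loss) pairs from small-scale runs.
# Here the "runs" are simulated from hidden ground-truth parameters; in
# practice each row would come from actually fine-tuning on that mixture.
true_params = []
for i in range(K):
    t = np.full(K, 0.1)
    t[i] = 1.0                      # a domain's own data transfers fully
    true_params.append((1.7 + 0.1 * i, 40.0 + 5.0 * i, 0.4, t))
mixtures = rng.dirichlet(np.ones(K), size=16)
obs = np.array([[predicted_loss(w, true_params, i) for i in range(K)]
                for w in mixtures])

# Step 2: fit (E, A, beta, transfer weights) per domain by least squares.
def fit_domain(i):
    off = [j for j in range(K) if j != i]
    def sse(x):
        E, logA, beta = x[:3]
        t = np.ones(K)
        t[off] = x[3:]
        pred = [predicted_loss(w, {i: (E, np.exp(logA), beta, t)}, i)
                for w in mixtures]
        return float(np.sum((np.array(pred) - obs[:, i]) ** 2))
    x0 = np.r_[1.0, np.log(30.0), 0.3, 0.2 * np.ones(K - 1)]
    res = minimize(sse, x0, method="Nelder-Mead",
                   options={"maxiter": 20000, "fatol": 1e-12})
    E, logA, beta = res.x[:3]
    t = np.ones(K)
    t[off] = res.x[3:]
    return (E, np.exp(logA), beta, t)

fitted = [fit_domain(i) for i in range(K)]

# Step 3: pick mixture weights minimizing the mean predicted loss,
# staying on the probability simplex via a softmax reparametrization.
def objective(z):
    w = np.exp(z - z.max()); w /= w.sum()
    return float(np.mean([predicted_loss(w, fitted, i) for i in range(K)]))

res = minimize(objective, np.zeros(K), method="Nelder-Mead")
w_star = np.exp(res.x - res.x.max()); w_star /= w_star.sum()
print("optimized mixture weights:", np.round(w_star, 3))
```

In a real application, obs would hold measured validation losses from genuine small-scale fine-tuning runs, and the step-3 objective could weight domains unequally to target a desired capability profile.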
Similar Papers
Massive Supervised Fine-tuning Experiments Reveal How Data, Layer, and Training Factors Shape LLM Alignment Quality
Computation and Language
Makes AI better at following instructions.
Improved Supervised Fine-Tuning for Large Language Models to Mitigate Catastrophic Forgetting
Computation and Language
Keeps AI smart while teaching it new tricks.
WeFT: Weighted Entropy-driven Fine-Tuning for dLLMs
Computation and Language
Makes AI better at solving puzzles and math.