Importance-Aware Data Selection for Efficient LLM Instruction Tuning
By: Tingyu Jiang, Shen Li, Yiyao Song, and more
Potential Business Impact:
Finds the best lessons to teach AI faster.
Instruction tuning plays a critical role in enhancing the performance and efficiency of Large Language Models (LLMs). Its success depends not only on the quality of the instruction data but also on the inherent capabilities of the LLM itself. Some studies suggest that even a small amount of high-quality data can achieve instruction fine-tuning results that are on par with, or even exceed, those from using a full-scale dataset. However, rather than focusing solely on calculating quality scores to evaluate instruction data, there is a growing need to select high-quality data that maximally enhances instruction-tuning performance for a given LLM. In this paper, we propose the Model Instruction Weakness Value (MIWV) as a novel metric to quantify the importance of instruction data in enhancing a model's capabilities. The MIWV metric is derived from the discrepancies in the model's responses when using In-Context Learning (ICL), helping identify the data most beneficial for improving instruction-tuning performance. Our experimental results demonstrate that selecting only the top 1% of data based on MIWV can outperform training on the full dataset. Furthermore, this approach moves beyond existing research that focuses on data quality scoring for data selection, and our results offer strong empirical evidence for the effectiveness of the proposed method.
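The abstract does not give the exact formula for MIWV, only that it is derived from discrepancies in the model's responses with and without In-Context Learning. Below is a minimal, hypothetical sketch of one way such a score could be computed, assuming the "discrepancy" is the drop in per-token loss on an example's response when a few demonstrations are prepended; the function names (response_loss, miwv_score) and the small stand-in model are illustrative, not from the paper.

```python
# Hypothetical MIWV-style scoring sketch: how much does ICL help the model
# answer each instruction? A larger gap suggests the model is weak on that
# example, making it a candidate for instruction tuning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; the paper targets larger LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def response_loss(prompt: str, response: str) -> float:
    """Mean cross-entropy of the response tokens, conditioned on the prompt."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    resp_ids = tok(response, return_tensors="pt").input_ids
    ids = torch.cat([prompt_ids, resp_ids], dim=1)
    labels = ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # score only the response tokens
    with torch.no_grad():
        return model(ids, labels=labels).loss.item()

def miwv_score(instruction: str, response: str,
               demos: list[tuple[str, str]]) -> float:
    """Assumed proxy for MIWV: loss without ICL minus loss with ICL demos."""
    plain = response_loss(instruction + "\n", response)
    icl_prompt = "".join(f"{q}\n{a}\n\n" for q, a in demos) + instruction + "\n"
    with_icl = response_loss(icl_prompt, response)
    return plain - with_icl

# Usage: rank the dataset by score and keep the top 1%, as in the paper's
# selection budget (data format and demos are assumptions).
# scores = [miwv_score(x["instruction"], x["response"], demos) for x in data]
# top_1pct = sorted(range(len(data)), key=scores.__getitem__,
#                   reverse=True)[: max(1, len(data) // 100)]
```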
Similar Papers
Call for Rigor in Reporting Quality of Instruction Tuning Data
Computation and Language
Helps AI understand what you want better.
MLLM-Selector: Necessity and Diversity-driven High-Value Data Selection for Enhanced Visual Instruction Tuning
CV and Pattern Recognition
Finds the best examples to teach AI to follow instructions.
Towards Alignment-Centric Paradigm: A Survey of Instruction Tuning in Large Language Models
Computation and Language
Teaches AI to follow instructions better.