T-SHIRT: Token-Selective Hierarchical Data Selection for Instruction Tuning
By: Yanjun Fu, Faisal Hamman, Sanghamitra Dutta
Potential Business Impact:
Teaches computers to learn better from fewer examples.
Instruction tuning is essential for Large Language Models (LLMs) to effectively follow user instructions. To improve training efficiency and reduce data redundancy, recent works use LLM-based scoring functions, e.g., Instruction-Following Difficulty (IFD), to select high-quality instruction-tuning data with scores above a threshold. While these data selection methods often yield models that match or even exceed the performance of models trained on the full datasets, we identify two key limitations: (i) they assess quality at the sample level, ignoring token-level informativeness; and (ii) they overlook the robustness of the scoring method, often selecting a sample because of superficial lexical features rather than its true quality. In this work, we propose Token-Selective HIeRarchical Data Selection for Instruction Tuning (T-SHIRT), a novel data selection framework that introduces a new scoring method to include only informative tokens in quality evaluation, and that also promotes robust, reliable samples whose neighbors likewise show high quality with fewer local inconsistencies. We demonstrate that models instruction-tuned on a curated dataset (only 5% of the original size) using T-SHIRT can outperform those trained on the entire large-scale dataset by up to 5.48 points on average across eight benchmarks. Across various LLMs and training set scales, our method consistently surpasses existing state-of-the-art data selection techniques while remaining both cost-effective and highly efficient: for instance, using GPT-2 for score computation, we can process a dataset of 52k samples in 40 minutes on a single GPU.
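The abstract names two scoring ideas: an IFD-style ratio computed over only informative answer tokens, and a robustness term that favors samples whose neighbors also score well. The paper's exact token filter and neighborhood construction are not given here, so the sketch below is a minimal, assumption-laden illustration rather than the authors' method: `per_token_losses`, `token_selective_ifd`, and `robust_score` are hypothetical names, the threshold `tau` on the unconditional token loss is a stand-in for the paper's informativeness criterion, and single-word-drop perturbations stand in for its notion of sample neighbors.

```python
# Sketch of token-selective IFD scoring with a neighborhood robustness
# term. Function names, the `tau` filter, and the word-drop perturbation
# are illustrative assumptions, not the paper's exact method.
import random

import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()

@torch.no_grad()
def per_token_losses(context: str, answer: str) -> torch.Tensor:
    """Cross-entropy of each answer token, conditioned on `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids.to(device)
    ans_ids = tokenizer(answer, return_tensors="pt").input_ids.to(device)
    input_ids = torch.cat([ctx_ids, ans_ids], dim=1)
    logits = model(input_ids).logits
    # Logits at position t predict token t+1, so the answer span is
    # predicted starting one position before the first answer token.
    ans_logits = logits[0, ctx_ids.size(1) - 1 : -1, :]
    return F.cross_entropy(ans_logits, ans_ids[0], reduction="none")

def token_selective_ifd(instruction: str, answer: str, tau: float = 1.0) -> float:
    """IFD ratio s(A|Q)/s(A), averaged over 'informative' tokens only."""
    cond = per_token_losses(instruction, answer)            # s(A|Q), per token
    uncond = per_token_losses(tokenizer.bos_token, answer)  # s(A), per token
    # Assumed filter: drop tokens the bare LM already predicts easily,
    # so generic boilerplate does not dilute the quality score.
    keep = uncond > tau
    if not keep.any():
        keep = torch.ones_like(keep)
    return (cond[keep].mean() / uncond[keep].mean()).item()

def robust_score(instruction: str, answer: str, k: int = 4, lam: float = 0.5) -> float:
    """Score a sample together with k perturbed neighbors, penalizing
    local inconsistency so lexically brittle high scorers are demoted."""
    scores = [token_selective_ifd(instruction, answer)]
    words = instruction.split()
    for _ in range(k):
        if len(words) > 1:
            drop = random.randrange(len(words))  # word-drop neighbor (assumed)
            neighbor = " ".join(w for i, w in enumerate(words) if i != drop)
            scores.append(token_selective_ifd(neighbor, answer))
    s = torch.tensor(scores)
    return (s.mean() - lam * s.std(unbiased=False)).item()

# Usage: rank a dataset and keep the top 5%, mirroring the curation ratio
# reported in the abstract.
# ranked = sorted(data, key=lambda ex: robust_score(ex["instruction"],
#                                                   ex["output"]), reverse=True)
# selected = ranked[: int(0.05 * len(ranked))]
```

The mean-minus-variance combination in `robust_score` is one plausible way to realize the stated goal: a sample that scores high only because of a superficial lexical quirk tends to score inconsistently across its neighbors and is down-weighted, while a genuinely high-quality sample keeps a high, stable score.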
Similar Papers
RAISE: Reinforced Adaptive Instruction Selection For Large Language Models
Computation and Language
Teaches AI better by picking the best lessons.
Large-Scale Data Selection for Instruction Tuning
Computation and Language
Finds better training words for smarter AI.
Beyond Similarity: A Gradient-based Graph Method for Instruction Tuning Data Selection
Computation and Language
Teaches computers to learn better from less data.