Accelerate Scaling of LLM Alignment via Quantifying the Coverage and Depth of Instruction Set
By: Chengwei Wu, Li Du, Hanyu Zhao, and more
Potential Business Impact:
Makes AI smarter and helps it learn faster.
With the growing demand for applying large language models to downstream tasks, improving model alignment performance and efficiency has become crucial. Such a process involves selecting informative instructions from a candidate pool. However, due to the complexity of instruction set distributions, the key factors driving the performance of aligned models remain unclear. As a result, current instruction set refinement methods fail to improve performance as the instruction pool expands continuously. To address this issue, we first investigate the key factors that influence the relationship between instruction dataset distribution and aligned model performance. Based on these insights, we propose a novel instruction data selection method. We identify that the depth of instructions and the coverage of the semantic space are the crucial factors determining downstream performance, which could explain over 70% of the model loss on the development set. We then design an instruction selection algorithm to simultaneously maximize the depth and semantic coverage of the selected instructions. Experimental results demonstrate that, compared to state-of-the-art baseline methods, it can sustainably improve model performance at a faster pace and thus achieve "Accelerated Scaling".
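The abstract describes selecting instructions so that both depth and semantic coverage are jointly maximized. The paper does not specify its algorithm here, but one common way to realize such an objective is a greedy facility-location-style selection: at each step, pick the candidate whose marginal coverage gain over the already-selected set, combined with its depth score, is largest. The sketch below is a hypothetical illustration of that idea; the embeddings, `depth_scores`, and the weighting `alpha` are all assumed inputs, not the authors' actual method.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def select_instructions(embeddings, depth_scores, k, alpha=0.5):
    """Greedily pick k instructions balancing semantic coverage and depth.

    Coverage is a facility-location objective: each pool item is "covered"
    by its most similar selected instruction, and a candidate's marginal
    gain is how much it improves that coverage across the whole pool.
    alpha trades off coverage gain vs. the candidate's depth score
    (both are illustrative assumptions, not the paper's exact objective).
    """
    n = len(embeddings)
    sim = [[cosine(embeddings[i], embeddings[j]) for j in range(n)]
           for i in range(n)]
    selected = []
    # best[i]: similarity of pool item i to its closest selected item so far
    best = [0.0] * n
    for _ in range(min(k, n)):
        top_gain, top_idx = -1.0, -1
        for c in range(n):
            if c in selected:
                continue
            cov_gain = sum(max(sim[c][i] - best[i], 0.0) for i in range(n))
            gain = alpha * cov_gain + (1 - alpha) * depth_scores[c]
            if gain > top_gain:
                top_gain, top_idx = gain, c
        selected.append(top_idx)
        best = [max(best[i], sim[top_idx][i]) for i in range(n)]
    return selected

# Toy pool: two near-duplicate instructions and one distinct, deep one.
emb = [[1.0, 0.0], [0.95, 0.05], [0.0, 1.0]]
depth = [0.2, 0.1, 0.9]
picked = select_instructions(emb, depth, k=2)
```

With near-duplicates in the pool, the coverage term discourages picking both copies, so the selection tends to span distinct semantic regions while the depth term favors harder instructions.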
Similar Papers
Scaling Towards the Information Boundary of Instruction Set: InfinityInstruct-Subject Technical Report
Artificial Intelligence
Teaches computers to follow harder instructions better.
Boosting Instruction Following at Scale
Artificial Intelligence
Makes AI follow instructions better, even many at once.
RAISE: Reinforced Adaptive Instruction Selection For Large Language Models
Computation and Language
Teaches AI better by picking the best lessons.