Progressive Depth Up-scaling via Optimal Transport
By: Mingzi Cao, Xi Wang, Nikolaos Aletras
Potential Business Impact:
Makes AI models learn faster and cheaper by growing them with well-aligned new layers.
Scaling Large Language Models (LLMs) yields performance gains but incurs substantial training costs. Depth up-scaling improves training efficiency by adding new layers to pre-trained models. However, most existing methods copy or average weights from base layers, ignoring differences in neuron permutations, which can cause misalignment that harms performance. Inspired by the use of Optimal Transport (OT) for neuron alignment, we propose Optimal Transport Depth Up-Scaling (OpT-DeUS). OpT-DeUS creates each new layer by aligning and fusing the Transformer blocks of adjacent base layers via OT, mitigating neuron permutation mismatch between layers. OpT-DeUS achieves better overall performance and higher training efficiency than existing methods for continual pre-training and supervised fine-tuning across different model sizes. Further analysis of interpolation positions shows that inserting new layers closer to the top yields higher training efficiency, owing to shorter back-propagation, while providing additional performance gains.
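As a rough illustration of the layer-creation step, the sketch below aligns the hidden neurons of one MLP block to those of an adjacent block and averages the aligned weights to form the inserted layer. This is a minimal sketch, not the authors' implementation: a hard permutation obtained with scipy's linear_sum_assignment stands in for a general OT plan, and the names (align_and_fuse_mlp, d_model, d_hidden) are illustrative assumptions.

import numpy as np
from scipy.optimize import linear_sum_assignment

def align_and_fuse_mlp(w_in_a, w_out_a, w_in_b, w_out_b):
    """Align the hidden neurons of MLP block B to block A, then average.

    w_in_*  : (d_hidden, d_model)  up-projection weights
    w_out_* : (d_model, d_hidden)  down-projection weights
    Returns fused (w_in, w_out) for the newly inserted layer.
    (Hypothetical helper; a hard assignment is used in place of a full OT plan.)
    """
    # Cost of matching neuron i in A to neuron j in B: negative similarity
    # of their incoming weight vectors (rows of the up-projection).
    cost = -w_in_a @ w_in_b.T                       # (d_hidden, d_hidden)
    row_ind, col_ind = linear_sum_assignment(cost)  # optimal hard matching

    perm = np.zeros_like(cost)
    perm[row_ind, col_ind] = 1.0                    # permutation matrix P

    # Apply the permutation consistently: rows of w_in_b, columns of w_out_b.
    w_in_b_aligned = perm @ w_in_b
    w_out_b_aligned = w_out_b @ perm.T

    # Fuse the aligned blocks (simple average) to create the new layer.
    w_in_new = 0.5 * (w_in_a + w_in_b_aligned)
    w_out_new = 0.5 * (w_out_a + w_out_b_aligned)
    return w_in_new, w_out_new

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d_model, d_hidden = 64, 256
    w_in_a, w_in_b = rng.normal(size=(2, d_hidden, d_model))
    w_out_a, w_out_b = rng.normal(size=(2, d_model, d_hidden))
    w_in_new, w_out_new = align_and_fuse_mlp(w_in_a, w_out_a, w_in_b, w_out_b)
    print(w_in_new.shape, w_out_new.shape)  # (256, 64) (64, 256)

In practice a soft transport plan (e.g., Sinkhorn) could replace the hard assignment, and attention heads would be aligned analogously; the key point is that the matching is applied consistently to both incoming and outgoing weights before fusion.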
Similar Papers
Self-Composing Neural Operators with Depth and Accuracy Scaling via Adaptive Train-and-Unroll Approach
Machine Learning (CS)
Makes computer models solve hard science problems faster.
Deep Progressive Training: scaling up depth capacity of zero/one-layer models
Machine Learning (CS)
Trains big computer brains faster, saving energy.
Unsupervised Learning for Optimal Transport plan prediction between unbalanced graphs
Machine Learning (CS)
Makes computers compare big networks much faster.