Don't be lazy: CompleteP enables compute-efficient deep transformers
By: Nolan Dey, Bin Claire Zhang, Lorenzo Noci, and more
Potential Business Impact:
Cuts the compute cost of training large AI models by 12-34% for the same quality.
We study compute efficiency of LLM training when using different parameterizations, i.e., rules for adjusting model and optimizer hyperparameters (HPs) as model size changes. Some parameterizations fail to transfer optimal base HPs (such as learning rate) across changes in model depth, requiring practitioners to either re-tune these HPs as they scale up (expensive), or accept sub-optimal training when re-tuning is prohibitive. Even when they achieve HP transfer, we develop theory to show parameterizations may still exist in the lazy learning regime where layers learn only features close to their linearization, preventing effective use of depth and nonlinearity. Finally, we identify and adopt the parameterization we call CompleteP that achieves both depth-wise HP transfer and non-lazy learning in all layers. CompleteP enables a wider range of model width/depth ratios to remain compute-efficient, unlocking shapes better suited for different hardware settings and operational contexts. Moreover, CompleteP enables 12-34% compute efficiency improvements over the prior state-of-the-art.
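To make the abstract's central idea concrete, a parameterization is a set of rules that rescales model and optimizer hyperparameters as width and depth grow, so that HPs tuned on a small proxy model transfer to a larger one. The sketch below is a minimal, hypothetical illustration of what such rules can look like in code; the function and parameter names (completep_like_rules, width_mult, depth_mult) are invented here, and the specific 1/depth residual-branch scaling and muP-style 1/width learning-rate rule are assumptions for illustration, not the paper's exact CompleteP prescriptions.

```python
# Hypothetical sketch of depth/width-aware hyperparameter scaling rules.
# The exact CompleteP rules are defined in the paper; the constants below
# (1/depth residual scaling, 1/width hidden-weight learning rate) are
# illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class ScalingRules:
    residual_scale: float  # multiplier applied to each residual-branch output
    hidden_lr: float       # per-layer Adam learning rate for hidden weights


def completep_like_rules(base_lr: float, width_mult: float, depth_mult: float) -> ScalingRules:
    """Rescale base HPs (tuned on a small proxy model) for a larger model.

    Assumptions: residual branches are damped by 1/depth_mult so deeper stacks
    keep activations stable, and hidden-weight learning rates follow a
    muP-style 1/width_mult rule so the optimal base LR transfers across width.
    """
    return ScalingRules(
        residual_scale=1.0 / depth_mult,
        hidden_lr=base_lr / width_mult,
    )


# Example: reuse base HPs from a small proxy when the target model is
# 4x wider and 8x deeper.
rules = completep_like_rules(base_lr=1e-2, width_mult=4.0, depth_mult=8.0)
print(rules)  # ScalingRules(residual_scale=0.125, hidden_lr=0.0025)
```

The point of the sketch is the workflow the abstract describes: tune base HPs once on a small model, then apply fixed scaling rules rather than re-tuning at every scale; the paper's contribution is identifying the rules (CompleteP) that make this transfer hold across depth while keeping every layer out of the lazy learning regime.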
Similar Papers
Deep Progressive Training: scaling up depth capacity of zero/one-layer models
Machine Learning (CS)
Trains big computer brains faster, saving energy.
Hyperparameter Transfer Enables Consistent Gains of Matrix-Preconditioned Optimizers Across Scales
Machine Learning (CS)
Makes computer learning models train much faster.
Beyond Real Weights: Hypercomplex Representations for Stable Quantization
Computer Vision and Pattern Recognition
Makes smart AI models smaller and faster.