Layer-Parallel Training for Transformers
By: Shuai Jiang, Marc Salvado, Eric C. Cyr, and more
We present a new training methodology for transformers using a multilevel, layer-parallel approach. Through a neural ODE formulation of transformers, our application of a multilevel parallel-in-time algorithm to the forward and backpropagation phases of training achieves parallel acceleration over the layer dimension. This dramatically enhances parallel scalability as network depth increases, which is particularly useful for increasingly large foundation models. However, the approach introduces errors that create a systematic bias in the gradients, which in turn slows convergence near a minimum. We develop an algorithm that detects this critical transition and either switches to serial training or systematically increases the accuracy of the layer-parallel computation. Results on BERT, GPT-2, ViT, and machine translation architectures demonstrate parallel acceleration with pre-training accuracy commensurate with serial training, while fine-tuning is unaffected.
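The neural ODE formulation mentioned above rests on a standard observation: a residual update x_{l+1} = x_l + h·f(x_l) is one forward-Euler step of the ODE dx/dt = f(x), so the layers of a deep residual network discretize a continuous-in-depth trajectory that parallel-in-time solvers can then act on. The sketch below illustrates only this serial layer-stepping view; the residual branch `f`, the step size `h`, and the depth are illustrative stand-ins, not the paper's architecture or its multilevel algorithm.

```python
import numpy as np

def f(x, W):
    # Stand-in for a transformer block's residual branch
    # (here a simple tanh layer, purely for illustration).
    return np.tanh(x @ W)

def forward_serial(x0, Ws, h):
    # Serial layer-by-layer forward pass: each residual layer is one
    # forward-Euler step of dx/dt = f(x) with step size h.
    x = x0
    for W in Ws:
        x = x + h * f(x, W)
    return x

rng = np.random.default_rng(0)
depth, d = 8, 4
Ws = [rng.standard_normal((d, d)) * 0.1 for _ in range(depth)]
x0 = rng.standard_normal((1, d))
h = 1.0 / depth  # shrinking the step as depth grows matches the ODE limit
out = forward_serial(x0, Ws, h)
```

Because the layer dimension plays the role of time in this view, parallel-in-time methods such as multigrid-in-time can distribute the depth loop across processors instead of executing it serially, which is the source of the layer-parallel speedup described in the abstract.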