BOOST: BOttleneck-Optimized Scalable Training Framework for Low-Rank Large Language Models
By: Zhengyang Wang, Ziyue Liu, Ruijie Zhang, et al.
The scale of transformer model pre-training is constrained by rising computation and communication costs. Low-rank bottleneck architectures offer a promising way to significantly reduce training time and memory footprint with minimal impact on accuracy. Despite their algorithmic efficiency, however, bottleneck architectures scale poorly under standard tensor parallelism: naively applying 3D parallelism designed for full-rank models leads to excessive communication and poor GPU utilization. To address this limitation, we propose BOOST, an efficient training framework tailored to large-scale low-rank bottleneck architectures. BOOST introduces a novel Bottleneck-aware Tensor Parallelism and combines it with optimizations such as online-RMSNorm, linear layer grouping, and low-rank activation checkpointing to achieve end-to-end training speedup. Evaluations on different low-rank bottleneck architectures demonstrate that BOOST achieves a 1.46-1.91$\times$ speedup over full-rank model baselines and a 1.87-2.27$\times$ speedup over low-rank models with naively integrated 3D parallelism, with improved GPU utilization and reduced communication overhead.
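To make the "low-rank bottleneck architecture" concrete, below is a minimal sketch of a rank-r factorized linear layer, the building block such methods replace full-rank weights with. The class name `LowRankLinear`, the `rank` argument, and the parameter counts in the comments are illustrative assumptions for this sketch, not BOOST's actual implementation.

```python
# Hypothetical sketch (not BOOST's code): a full-rank weight W (d_out x d_in)
# is replaced by two factors, a down-projection (r x d_in) and an
# up-projection (d_out x r), with rank r << min(d_in, d_out).
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    def __init__(self, d_in: int, d_out: int, rank: int):
        super().__init__()
        # Project into the rank-r bottleneck, then back up to d_out.
        self.down = nn.Linear(d_in, rank, bias=False)
        self.up = nn.Linear(rank, d_out, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Parameters/FLOPs drop from d_in * d_out to r * (d_in + d_out).
        return self.up(self.down(x))

# Example: a 4096 -> 4096 projection at rank 256 stores ~2.1M parameters
# instead of ~16.8M. The small rank-r activation between the two matmuls is
# plausibly the kind of tensor that low-rank activation checkpointing retains.
layer = LowRankLinear(4096, 4096, rank=256)
y = layer(torch.randn(2, 16, 4096))
```

The narrow rank-r dimension is also why standard tensor parallelism fares poorly here: sharding such thin matrices across devices leaves each GPU with little compute per communication step, which is the imbalance the abstract attributes to naively applied 3D parallelism.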