Turbo-Muon: Accelerating Orthogonality-Based Optimization with Pre-Conditioning
By: Thibaut Boissin, Thomas Massena, Franck Mamalet, and more
Potential Business Impact:
Makes training AI models faster without losing accuracy.
Orthogonality-based optimizers, such as Muon, have recently shown strong performance across large-scale training and community-driven efficiency challenges. However, these methods rely on a costly gradient orthogonalization step. Even efficient iterative approximations such as Newton-Schulz remain expensive, typically requiring dozens of matrix multiplications to converge. We introduce a preconditioning procedure that accelerates Newton-Schulz convergence and reduces its computational cost. We evaluate its impact and show that the overhead of our preconditioning can be made negligible. Furthermore, the faster convergence it enables allows us to remove one iteration out of the usual five without degrading approximation quality. Our publicly available implementation achieves up to a 2.8x speedup in the Newton-Schulz approximation. We also show that this has a direct impact on end-to-end training runtime, with a 5-10% improvement in realistic training scenarios across two efficiency-focused tasks. On challenging language and vision tasks, we validate that our method maintains equal or superior model performance while improving runtime. Crucially, these improvements require no hyperparameter tuning and can be adopted as a simple drop-in replacement. Our code is publicly available on GitHub.
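To make the bottleneck concrete, here is a minimal PyTorch sketch of the Newton-Schulz orthogonalization step that Muon applies to each gradient matrix, using the quintic coefficients popularized by the open-source Muon implementation. The abstract does not describe the paper's preconditioner, so the `precondition` function below is a hypothetical placeholder marking where such a step would be applied; this is an illustrative sketch, not the authors' implementation.

```python
import torch


def precondition(X: torch.Tensor) -> torch.Tensor:
    # Hypothetical placeholder: the paper's preconditioning procedure would
    # transform X here so that fewer Newton-Schulz iterations are needed.
    # We simply return X unchanged in this sketch.
    return X


def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximately map G to the nearest semi-orthogonal matrix."""
    a, b, c = 3.4445, -4.7750, 2.0315  # quintic iteration coefficients used by Muon
    X = G.bfloat16()
    transposed = X.shape[-2] > X.shape[-1]
    if transposed:  # work with the wide orientation so the Gram matrix is small
        X = X.mT
    # Frobenius normalization guarantees the spectral norm is at most 1.
    X = X / (X.norm(dim=(-2, -1), keepdim=True) + 1e-7)
    X = precondition(X)  # the paper accelerates convergence with a step like this
    for _ in range(steps):  # the paper shows one of the usual five can be dropped
        A = X @ X.mT
        B = b * A + c * A @ A
        X = a * X + B @ X
    if transposed:
        X = X.mT
    return X.to(G.dtype)


# Usage: orthogonalize the (momentum-averaged) gradient of a weight matrix.
if __name__ == "__main__":
    g = torch.randn(512, 1024)
    update = newton_schulz_orthogonalize(g, steps=5)
```

Each iteration costs a handful of matrix multiplications, so removing one of the five iterations and cheapening the remaining ones is where the reported 2.8x orthogonalization speedup and 5-10% end-to-end gains come from.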
Similar Papers
Low-rank Orthogonalization for Large-scale Matrix Optimization with Applications to Foundation Model Training
Machine Learning (CS)
Makes AI learn faster and better.
NorMuon: Making Muon more efficient and scalable
Machine Learning (CS)
Makes AI learn faster and better.
The Potential of Second-Order Optimization for LLMs: A Study with Full Gauss-Newton
Machine Learning (CS)
Trains AI models much faster using smart math.