Iterative Orthogonalization Scaling Laws
By: Devan Selvaraj
Potential Business Impact:
Makes machine learning training faster and more stable.
The Muon optimizer has attracted considerable attention lately as a possible replacement for the near-ubiquitous Adam optimizer. Recent work has documented the scaling laws of hyper-parameters under Muon, such as weight decay and learning rate. However, at much larger scales the iterative orthogonalization procedure inside Muon may run into a problem: the singular values of random matrices shrink with scale, so a fixed number of iterations orthogonalizes the update less completely. This paper demonstrates the scaling behavior theoretically and empirically on random matrices, but does not propose a remedy.
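As a rough illustration (not taken from the paper), the sketch below uses the quintic Newton-Schulz iteration with the coefficients commonly cited for Muon's reference implementation, applied to Frobenius-normalized Gaussian matrices of increasing size; the smallest singular value of the normalized input shrinks with dimension, and the output is correspondingly further from orthogonal after a fixed five steps.

import numpy as np

def newton_schulz(G, steps=5, a=3.4445, b=-4.7750, c=2.0315):
    # Quintic Newton-Schulz iteration (coefficients as in Muon's reference
    # implementation); it pushes the singular values of X toward 1.
    X = G / (np.linalg.norm(G) + 1e-7)  # Frobenius-norm normalization
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X

rng = np.random.default_rng(0)
for n in (64, 256, 1024):
    G = rng.standard_normal((n, n))
    s_in = np.linalg.svd(G / np.linalg.norm(G), compute_uv=False)
    s_out = np.linalg.svd(newton_schulz(G), compute_uv=False)
    print(f"n={n:5d}  min input sv={s_in.min():.2e}  "
          f"min sv after 5 steps={s_out.min():.3f}")

The printed minimum singular values after five steps drop as n grows, which is the qualitative effect the paper studies; the exact rates and constants are the paper's contribution, not this sketch's.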
Similar Papers
NorMuon: Making Muon more efficient and scalable
Machine Learning (CS)
Makes AI learn faster and better.
Low-rank Orthogonalization for Large-scale Matrix Optimization with Applications to Foundation Model Training
Machine Learning (CS)
Makes AI learn faster and better.
AdaGrad Meets Muon: Adaptive Stepsizes for Orthogonal Updates
Machine Learning (CS)
Makes machine learning training faster and better.