An Exploration of Non-Euclidean Gradient Descent: Muon and its Many Variants
By: Michael Crawshaw, Chirag Modi, Mingrui Liu, and more
Potential Business Impact:
Makes machine learning faster and more reliable.
To define a steepest descent method over a neural network, we need to choose a norm for each layer, a way to aggregate these norms across layers, and whether to use normalization. We systematically explore the alternatives for aggregating norms across layers, both formalizing existing combinations of Adam and the recently proposed Muon as types of non-Euclidean gradient descent and deriving new variants of the Muon optimizer. Through a comprehensive experimental evaluation of the optimizers within our framework, we find that Muon is sensitive to the choice of learning rate, whereas a new variant we call MuonMax is significantly more robust. We then show how to combine any non-Euclidean gradient method with model-based momentum (known as Momo). The new Momo variants of Muon are significantly more robust to hyperparameter tuning and often achieve a better validation score. Thus, for new tasks where the optimal hyperparameters are not known, we advocate using Momo in combination with MuonMax to save on costly hyperparameter tuning.
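To make the "non-Euclidean gradient descent" framing concrete, the sketch below shows a Muon-style update: per-layer steepest descent under the spectral norm, realized by orthogonalizing a momentum-averaged gradient with a Newton-Schulz iteration. The function names, step counts, and learning-rate/momentum constants are illustrative assumptions, not the authors' implementation, and the MuonMax and Momo variants are not reproduced because the abstract does not spell out their exact update rules.

```python
# Hedged sketch of a Muon-style step: steepest descent under the spectral
# norm for each 2D weight matrix, using Newton-Schulz orthogonalization.
# Constants and names here are illustrative, not the paper's code.
import torch


def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 10) -> torch.Tensor:
    """Approximate the orthogonal polar factor of G (roughly U V^T from its SVD)."""
    # Frobenius normalization bounds the singular values by 1, so the cubic
    # Newton-Schulz iteration X <- 1.5 X - 0.5 X X^T X converges.
    X = G / (G.norm() + 1e-7)
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.mT @ X
    return X


def muon_style_step(weights, grads, momentum, lr=0.02, beta=0.95):
    """One illustrative update over a list of 2D weight matrices."""
    for W, g, m in zip(weights, grads, momentum):
        m.mul_(beta).add_(g)                     # heavy-ball momentum on the gradient
        direction = newton_schulz_orthogonalize(m)  # spectral-norm steepest-descent direction
        W.add_(direction, alpha=-lr)


# Toy usage on two random "layers".
torch.manual_seed(0)
weights = [torch.randn(64, 32), torch.randn(32, 16)]
grads = [torch.randn_like(w) for w in weights]
momentum = [torch.zeros_like(w) for w in weights]
muon_style_step(weights, grads, momentum)
```

In this view, the design choices the abstract enumerates correspond to (i) which per-layer norm defines the steepest-descent direction (spectral for Muon-like updates, a coordinate-wise norm for Adam-like updates), and (ii) how those per-layer norms are aggregated into a single norm over all parameters, which is where MuonMax differs from plain Muon.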
Similar Papers
NorMuon: Making Muon more efficient and scalable
Machine Learning (CS)
Makes AI learn faster and better.
Error Feedback for Muon and Friends
Machine Learning (CS)
Makes AI learn faster with less data sent.
Drop-Muon: Update Less, Converge Faster
Machine Learning (CS)
Trains computer brains much faster.