Score: 1

An Exploration of Non-Euclidean Gradient Descent: Muon and its Many Variants

Published: October 10, 2025 | arXiv ID: 2510.09827v1

By: Michael Crawshaw, Chirag Modi, Mingrui Liu, and more

Potential Business Impact:

Makes training machine learning models faster and more reliable.

Business Areas:
A/B Testing; Data and Analytics

To define a steepest descent method over a neural network, we need to choose a norm for each layer, a way to aggregate these norms across layers, and whether to use normalization. We systematically explore different alternatives for aggregating norms across layers, both formalizing existing combinations of Adam and the recently proposed Muon as a type of non-Euclidean gradient descent, and deriving new variants of the Muon optimizer. Through a comprehensive experimental evaluation of the optimizers within our framework, we find that Muon is sensitive to the choice of learning rate, whereas a new variant we call MuonMax is significantly more robust. We then show how to combine any non-Euclidean gradient method with model-based momentum (known as Momo). The new Momo variants of Muon are significantly more robust to hyperparameter tuning and often achieve a better validation score. Thus, for new tasks where the optimal hyperparameters are not known, we advocate using Momo in combination with MuonMax to save on costly hyperparameter tuning.
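The basic building block behind Muon and the variants discussed above is a steepest-descent step under a per-layer matrix norm: the momentum of each 2-D weight matrix is (approximately) orthogonalized before it is applied, which is the steepest-descent direction under the spectral norm. Below is a minimal NumPy sketch of such a step, assuming the Newton-Schulz orthogonalization used in the public Muon reference implementation; the function names, hyperparameter values, and loop structure are illustrative and are not taken from this paper, and the sketch does not reproduce the MuonMax aggregation or the Momo model-based momentum described in the abstract.

```python
import numpy as np

def orthogonalize(M, steps=5):
    """Approximate U V^T from the SVD M = U S V^T via a Newton-Schulz iteration.
    This is the steepest-descent direction for a matrix under the spectral norm."""
    a, b, c = 3.4445, -4.7750, 2.0315          # quintic coefficients from the public Muon code (assumed)
    X = M / (np.linalg.norm(M) + 1e-7)         # scale so the iteration converges
    transposed = X.shape[0] > X.shape[1]
    if transposed:                             # work with the smaller Gram matrix
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if transposed else X

def muon_step(W, G, buf, lr=0.02, beta=0.95):
    """One Muon-style update for a 2-D weight matrix W given its gradient G (illustrative)."""
    buf = beta * buf + G                       # heavy-ball momentum buffer
    W = W - lr * orthogonalize(buf)            # spectral-norm steepest-descent step
    return W, buf

# Toy usage with random stand-in gradients.
rng = np.random.default_rng(0)
W, buf = rng.standard_normal((128, 64)), np.zeros((128, 64))
for _ in range(3):
    G = rng.standard_normal(W.shape)
    W, buf = muon_step(W, G, buf)
```

In the paper's framework, the remaining design choices are how these per-layer norms are aggregated across layers (which is where variants such as MuonMax arise) and whether the resulting update is normalized; those choices would enter the sketch above as a shared scaling applied across the per-layer updates.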

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
29 pages

Category
Computer Science: Machine Learning (CS)