Drop-Muon: Update Less, Converge Faster
By: Kaja Gruntkowska, Yassine Maziane, Zheng Qu, and more
Potential Business Impact:
Trains neural networks faster by updating only a subset of layers at each step, cutting compute and wall-clock time.
Conventional wisdom in deep learning optimization dictates updating all layers at every step, a principle followed by all recent state-of-the-art optimizers such as Muon. In this work, we challenge this assumption, showing that full-network updates can be fundamentally suboptimal, both in theory and in practice. We introduce Drop-Muon, a non-Euclidean Randomized Progressive Training method: a simple yet powerful framework that updates only a subset of layers per step according to a randomized schedule, combining the efficiency of progressive training with layer-specific non-Euclidean updates for top-tier performance. We provide rigorous convergence guarantees under both layer-wise smoothness and layer-wise $(L^0, L^1)$-smoothness, covering deterministic and stochastic gradient settings; these are the first such results for progressive training in the stochastic and non-smooth regime. Our cost analysis further reveals that full-network updates are not optimal unless a very specific relationship between layer smoothness constants holds. Through controlled CNN experiments, we empirically demonstrate that Drop-Muon consistently outperforms full-network Muon, reaching the same accuracy up to $1.4\times$ faster in wall-clock time. Together, our results suggest a shift in how large-scale models can be trained efficiently, challenging the status quo and offering a highly efficient, theoretically grounded alternative to full-network updates.
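To make the high-level recipe concrete, here is a minimal PyTorch sketch of one randomized layer-drop step combined with a Muon-style orthogonalized update. This is not the authors' implementation: the function names (newton_schulz_orthogonalize, drop_muon_step), the per-layer probabilities in probs, the learning rate, and the Newton-Schulz coefficients are illustrative assumptions; the paper's actual sampling schedule and cost analysis are more refined than this uniform coin flip.

```python
import torch


def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    # Approximately orthogonalize a gradient matrix, as Muon-style optimizers do.
    # The quintic iteration coefficients below are those commonly used in public
    # Muon implementations (an assumption here, not taken from this paper).
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (G.norm() + 1e-7)
    transposed = X.shape[0] > X.shape[1]
    if transposed:
        X = X.T  # iterate on the smaller Gram matrix
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * (A @ A)) @ X
    return X.T if transposed else X


@torch.no_grad()
def drop_muon_step(weights, probs, lr=0.02):
    # Randomized layer schedule: each matrix parameter is updated with its own
    # probability; layers that are not sampled are simply skipped this step.
    for p, q in zip(weights, probs):
        if p.grad is None or p.ndim != 2 or torch.rand(()).item() >= q:
            continue  # layer "dropped": no orthogonalization or weight write
        p -= lr * newton_schulz_orthogonalize(p.grad)


# Toy usage: two linear layers, with the later layer updated every step and the
# earlier one only half the time (probabilities are arbitrary, for illustration).
model = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
torch.nn.functional.cross_entropy(model(x), y).backward()
weights = [p for p in model.parameters() if p.ndim == 2]
drop_muon_step(weights, probs=[0.5, 1.0])
```

In this sketch, skipping a layer skips both its orthogonalization and its weight write, which is where the per-step savings over full-network Muon would come from.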
Similar Papers
An Exploration of Non-Euclidean Gradient Descent: Muon and its Many Variants
Machine Learning (CS)
Explores non-Euclidean gradient descent through Muon and its variants.
NorMuon: Making Muon more efficient and scalable
Machine Learning (CS)
Makes the Muon optimizer more efficient and scalable.
Beyond the Ideal: Analyzing the Inexact Muon Update
Machine Learning (CS)
Analyzes how inexact Muon updates affect optimization.