Through the River: Understanding the Benefit of Schedule-Free Methods for Language Model Training
By: Minhak Song, Beomhan Baek, Kwangjun Ahn, and others
Potential Business Impact:
Enables large language models to be trained more efficiently and at larger scale without hand-tuned learning rate schedules.
As both model and dataset sizes continue to scale rapidly, conventional pretraining strategies with fixed compute budgets, such as cosine learning rate schedules, are increasingly inadequate for large-scale training. Recent alternatives, including warmup-stable-decay (WSD) schedules and weight averaging, offer greater flexibility. However, WSD relies on explicit decay phases to track progress, while weight averaging addresses this limitation at the cost of additional memory. In search of a more principled and scalable alternative, we revisit the Schedule-Free (SF) method [Defazio et al., 2024], which has shown strong empirical performance across diverse settings. We show that SF-AdamW effectively navigates the "river" structure of the loss landscape without decay phases or auxiliary averaging, making it particularly suitable for continuously scaling training workloads. To understand this behavior, we conduct a theoretical and empirical analysis of SF dynamics, revealing that it implicitly performs weight averaging without memory overhead. Guided by this analysis, we propose a refined variant of SF that improves robustness to momentum and performs better under large batch sizes, addressing key limitations of the original method. Together, these results establish SF as a practical, scalable, and theoretically grounded approach for language model training.
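For readers unfamiliar with the Schedule-Free update that the abstract builds on, the sketch below illustrates its interpolation-and-averaging structure, following the update form described by Defazio et al. [2024]. It is a minimal single-tensor NumPy version for illustration only; the function name, parameter defaults, and simplifications (e.g., constant learning rate, uniform averaging weights) are assumptions, not code from this paper or the reference implementation.

```python
# Minimal sketch of one Schedule-Free AdamW step (after Defazio et al., 2024).
# Illustrative single-tensor version; names z, x, y, beta1, beta2 follow the
# paper's notation, but defaults and structure here are simplifying assumptions.
import numpy as np

def sf_adamw_step(x, z, v, grad_fn, t, lr=1e-3, beta1=0.9, beta2=0.999,
                  eps=1e-8, weight_decay=0.0):
    """One Schedule-Free AdamW update.

    x : averaged iterate (the weights used at evaluation time)
    z : base iterate updated by the Adam-style step
    v : exponential moving average of squared gradients
    grad_fn : callable returning the gradient at a given point (hypothetical)
    t : current step index, starting from 1
    """
    # The gradient is taken at the interpolation y = (1 - beta1) * z + beta1 * x;
    # this interpolation plays the role that momentum plays in standard AdamW.
    y = (1.0 - beta1) * z + beta1 * x
    g = grad_fn(y)

    # Standard Adam second-moment estimate with bias correction.
    v = beta2 * v + (1.0 - beta2) * g * g
    v_hat = v / (1.0 - beta2 ** t)

    # The base iterate takes the preconditioned step (decoupled weight decay at y).
    z = z - lr * (g / (np.sqrt(v_hat) + eps) + weight_decay * y)

    # Running average of the base iterates: x_t = (1 - c_t) x_{t-1} + c_t z_t,
    # with c_t = 1/t under a constant learning rate. This online average is the
    # "implicit weight averaging" the abstract refers to: no decay phase and no
    # separate averaging buffer beyond x itself.
    c = 1.0 / t
    x = (1.0 - c) * x + c * z
    return x, z, v
```

Note that x takes the place of AdamW's first-moment buffer rather than adding to it, which is why the averaging comes without extra memory relative to standard AdamW.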
Similar Papers
WSM: Decay-Free Learning Rate Schedule via Checkpoint Merging for LLM Pre-training
Computation and Language
Improves LLM pretraining by merging checkpoints instead of decaying the learning rate.
Unveiling the Role of Learning Rate Schedules via Functional Scaling Laws
Machine Learning (CS)
Explains how learning rate schedules shape training performance through functional scaling laws.