LSAM: Asynchronous Distributed Training with Landscape-Smoothed Sharpness-Aware Minimization

Published: September 3, 2025 | arXiv ID: 2509.03110v1

By: Yunfei Teng, Sixin Zhang

Potential Business Impact:

Trains deep learning models faster across many machines at large batch sizes while preserving, and in reported results improving, final accuracy.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

While Sharpness-Aware Minimization (SAM) improves generalization in deep neural networks by minimizing both loss and sharpness, it suffers from inefficiency in distributed large-batch training. We present Landscape-Smoothed SAM (LSAM), a novel optimizer that preserves SAM's generalization advantages while offering superior efficiency. LSAM integrates SAM's adversarial steps with an asynchronous distributed sampling strategy, producing a smoothed sharpness-aware loss landscape for optimization. This design eliminates synchronization bottlenecks, accelerates large-batch convergence, and delivers higher final accuracy compared to data-parallel SAM.
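To make the mechanism concrete, below is a minimal sketch of SAM's two-pass adversarial step, which LSAM builds on. This is illustrative PyTorch-style code under assumptions: the function names, training-loop shape, and `rho` default are not taken from the paper, and LSAM's asynchronous exchange between workers is only indicated in comments, not implemented.

```python
import torch

def sam_step(model, loss_fn, data, target, base_opt, rho=0.05):
    """One sharpness-aware update (two forward/backward passes).

    `rho` is the perturbation radius; 0.05 is a common SAM default,
    not a value taken from the LSAM paper.
    """
    # Pass 1: gradient of the loss at the current weights w.
    base_opt.zero_grad()
    loss_fn(model(data), target).backward()

    # Inner maximization: move to the adversarial point w + eps,
    # where eps = rho * grad / ||grad||.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    perturbs = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)                # step to the adversarial point
            perturbs.append((p, e))

    # Pass 2: gradient at the perturbed point, then restore w.
    base_opt.zero_grad()
    loss_fn(model(data), target).backward()
    with torch.no_grad():
        for p, e in perturbs:
            p.sub_(e)                # return to the original weights

    # Descend along the sharpness-aware gradient.
    base_opt.step()
    # In LSAM, each worker would run steps like this independently and
    # exchange parameters asynchronously (no lock-step all-reduce); the
    # exact exchange and smoothing rule is defined in the paper, not here.
```

Per the abstract, it is this asynchronous exchange, in place of data-parallel SAM's synchronized all-reduce, that removes the synchronization bottleneck and yields the smoothed sharpness-aware landscape.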

Country of Origin
🇨🇳 China

Page Count
21 pages

Category
Computer Science:
Machine Learning (CS)