Layer-wise Adaptive Gradient Norm Penalizing Method for Efficient and Accurate Deep Learning
By: Sunwoo Lee
Potential Business Impact:
Makes smart computer programs learn better and faster.
Sharpness-aware minimization (SAM) is known to improve the generalization performance of neural networks. However, it is not yet widely used in real-world applications due to its expensive model perturbation cost. A few variants of SAM have been proposed to address this issue, but they do not noticeably alleviate the cost. In this paper, we propose a lightweight layer-wise gradient norm penalizing method that tackles the expensive computational cost of SAM while maintaining its superior generalization performance. Our study empirically shows that the gradient norm of the whole model can be effectively suppressed by penalizing the gradient norm of only a few critical layers. We also theoretically show that such partial model perturbation does not harm the convergence rate of SAM, allowing it to be safely adopted in real-world applications. To demonstrate the efficacy of the proposed method, we perform extensive experiments comparing it to mini-batch SGD and conventional SAM on representative computer vision and language modeling benchmarks.
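To make the partial-perturbation idea concrete, the sketch below applies a SAM-style ascent step only to a user-chosen subset of layers before the second gradient computation. This is a minimal PyTorch-style illustration under assumptions of our own: the `partial_sam_step` helper, the `critical_layers` name-prefix list, and the `rho` radius are all hypothetical and are not the authors' implementation or layer-selection rule.

```python
import torch

def partial_sam_step(model, loss_fn, data, target, optimizer,
                     critical_layers, rho=0.05):
    """One training step of a SAM-like update that perturbs only the
    parameters whose names start with a prefix in `critical_layers`.
    Hypothetical sketch, not the paper's code."""
    optimizer.zero_grad()

    # First pass: gradients at the current weights.
    loss_fn(model(data), target).backward()

    # Perturb only the critical layers along their gradient direction,
    # scaled so the joint perturbation has norm rho (as in standard SAM).
    perturbed = []
    with torch.no_grad():
        grads = [p.grad for name, p in model.named_parameters()
                 if p.grad is not None
                 and any(name.startswith(l) for l in critical_layers)]
        grad_norm = torch.norm(
            torch.stack([g.norm(p=2) for g in grads]), p=2) + 1e-12
        for name, p in model.named_parameters():
            if p.grad is not None and any(name.startswith(l)
                                          for l in critical_layers):
                e = rho * p.grad / grad_norm
                p.add_(e)
                perturbed.append((p, e))

    # Second pass: gradients at the partially perturbed weights.
    optimizer.zero_grad()
    loss_fn(model(data), target).backward()

    # Undo the perturbation, then update with the usual optimizer step.
    with torch.no_grad():
        for p, e in perturbed:
            p.sub_(e)
    optimizer.step()
```

Because only the listed layers are perturbed, the second forward/backward pass is the only extra cost, and the perturbation bookkeeping touches a small fraction of the model's parameters.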
Similar Papers
Asynchronous Sharpness-Aware Minimization For Fast and Accurate Deep Learning
Machine Learning (CS)
Makes smart computer programs learn faster and better.
Sharpness-Aware Minimization: General Analysis and Improved Rates
Optimization and Control
Makes computer learning models work better.
Unveiling m-Sharpness Through the Structure of Stochastic Gradient Noise
Machine Learning (CS)
Makes computer learning models work better.