Insights from Gradient Dynamics: Gradient Autoscaled Normalization
By: Vincent-Daniel Yun
Potential Business Impact:
Makes machine learning training more stable and accurate.
Gradient dynamics play a central role in determining the stability and generalization of deep neural networks. In this work, we provide an empirical analysis of how the variance and standard deviation of gradients evolve during training, showing consistent changes across layers and at the global scale in convolutional networks. Motivated by these observations, we propose a hyperparameter-free gradient normalization method that aligns gradient scaling with the gradients' natural evolution. This approach prevents unintended amplification, stabilizes optimization, and preserves convergence guarantees. Experiments on the challenging CIFAR-100 benchmark with ResNet-20, ResNet-56, and VGG-16-BN demonstrate that our method maintains or improves test accuracy even under strong generalization pressure. Beyond practical performance, our study highlights the importance of directly tracking gradient dynamics, aiming to bridge the gap between theoretical expectations and empirical behaviors, and to provide insights for future optimization research.
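The abstract does not spell out the normalization rule, but the idea of aligning gradient scaling with its observed evolution can be sketched as follows. This is an illustrative assumption, not the authors' exact algorithm: the hypothetical `autoscale_gradient` function rescales a gradient only when its standard deviation spikes above a running reference, so well-scaled gradients pass through unchanged and no hyperparameter is introduced.

```python
import numpy as np

def autoscale_gradient(grad, running_std):
    """Hypothetical sketch: cap the gradient's standard deviation
    at its historically observed value (running_std), preventing
    unintended amplification without any tunable hyperparameter."""
    current_std = grad.std()
    if current_std > running_std > 0.0:
        grad = grad * (running_std / current_std)
    return grad

# Toy usage: a gradient whose scale has spiked above its history.
rng = np.random.default_rng(0)
g = rng.normal(0.0, 5.0, size=1000)  # current gradient, std near 5
hist_std = 1.0                       # running std from earlier steps
g_scaled = autoscale_gradient(g, hist_std)
# g_scaled now has standard deviation equal to hist_std
```

In an actual training loop, `running_std` would be updated each step (for example, as an exponential moving average of per-step gradient standard deviations), so the cap tracks the gradient's natural decay over training rather than a fixed constant.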