Weight Initialization and Variance Dynamics in Deep Neural Networks and Large Language Models
By: Yankun Han
Potential Business Impact:
Makes computer learning faster and more stable.
Weight initialization governs signal propagation and gradient flow at the start of training. This paper offers a theory-grounded and empirically validated study across two regimes: compact ReLU multilayer perceptrons and GPT-2-style transformers. First, a logarithmic sweep of the initial standard deviation maps the vanishing and exploding regimes and identifies a broad stability band with standard deviations between 1e-2 and 1e-1. Second, a controlled comparison shows that Kaiming (fan-in) initialization converges faster and more stably than Xavier under ReLU, consistent with variance-preserving theory. Third, in a from-scratch 12-layer GPT-2-style model, this paper tracks layerwise Q/K/V weight variance through pretraining and observes depth-dependent equilibration into narrow bands: shallow layers expand rapidly while deeper layers change more gradually. Together, these results connect classic initialization principles with modern transformer behavior and yield simple, practical recipes for robust training.
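As a rough illustration of the variance-preserving argument, the sketch below builds a small ReLU MLP, initializes it with Kaiming (fan-in), Xavier, or a fixed standard deviation, and measures how activation variance evolves with depth. It is a minimal sketch assuming PyTorch; the width, depth, batch size, and std grid are illustrative placeholders, not the paper's exact configuration.

```python
# Minimal sketch (assumes PyTorch; all sizes and the std grid are illustrative).
import torch
import torch.nn as nn

def make_mlp(width=256, depth=8, scheme="kaiming", std=0.02):
    """ReLU MLP whose weights are set by the chosen initialization scheme."""
    layers = []
    for _ in range(depth):
        lin = nn.Linear(width, width)
        if scheme == "kaiming":   # fan-in Kaiming: std = sqrt(2 / fan_in), includes the ReLU gain
            nn.init.kaiming_normal_(lin.weight, mode="fan_in", nonlinearity="relu")
        elif scheme == "xavier":  # Xavier: std = sqrt(2 / (fan_in + fan_out)), no ReLU gain
            nn.init.xavier_normal_(lin.weight)
        else:                     # fixed std, as in a logarithmic sweep of the initial scale
            nn.init.normal_(lin.weight, mean=0.0, std=std)
        nn.init.zeros_(lin.bias)
        layers += [lin, nn.ReLU()]
    return nn.Sequential(*layers)

@torch.no_grad()
def activation_variances(model, width=256, batch=4096):
    """Push Gaussian noise through the net and record the post-ReLU variance per layer."""
    x = torch.randn(batch, width)
    variances = []
    for layer in model:
        x = layer(x)
        if isinstance(layer, nn.ReLU):
            variances.append(x.var().item())
    return variances

if __name__ == "__main__":
    for scheme in ("kaiming", "xavier"):
        v = activation_variances(make_mlp(scheme=scheme))
        print(scheme, ["%.3f" % x for x in v])
    # Fixed-std sweep: too small and variance collapses with depth; too large and it explodes.
    for std in (1e-3, 1e-2, 1e-1, 1.0):
        v = activation_variances(make_mlp(scheme="fixed", std=std))
        print("std=%g" % std, ["%.3g" % x for x in v])
```

The same per-tensor variance readout, applied to a transformer block's query/key/value projection weights at successive training checkpoints, would give the kind of layerwise Q/K/V variance trace described for the GPT-2-style model.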
Similar Papers
Depth-Aware Initialization for Stable and Efficient Neural Network Training
Machine Learning (CS)
Makes computer brains learn faster and better.
Optimal Condition for Initialization Variance in Deep Neural Networks: An SGD Dynamics Perspective
Machine Learning (Stat)
Sets computer learning starting numbers for better results.
Starting Positions Matter: A Study on Better Weight Initialization for Neural Network Quantization
CV and Pattern Recognition
Makes computer brains work well with less memory.