Weight Initialization and Variance Dynamics in Deep Neural Networks and Large Language Models

Published: October 10, 2025 | arXiv ID: 2510.09423v1

By: Yankun Han

Potential Business Impact:

Makes training deep learning models faster and more stable.

Business Areas:
A/B Testing Data and Analytics

Weight initialization governs signal propagation and gradient flow at the start of training. This paper offers a theory-grounded and empirically validated study across two regimes: compact ReLU multilayer perceptrons and GPT-2-style transformers. First, a logarithmic sweep of the initial standard deviation maps the vanishing and exploding regimes and identifies a broad stability band with standard deviations between 1e-2 and 1e-1. Second, a controlled comparison shows that Kaiming (fan-in) initialization converges faster and more stably than Xavier under ReLU, consistent with variance-preserving theory. Third, in a from-scratch 12-layer GPT-2-style model, the paper tracks layerwise Q/K/V weight variance through pretraining and observes depth-dependent equilibration into narrow bands: shallow layers expand rapidly while deeper layers change more gradually. Together, these results connect classic initialization principles with modern transformer behavior and yield simple, practical recipes for robust training.
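To illustrate the Kaiming-versus-Xavier comparison the abstract describes, here is a minimal PyTorch sketch (an assumption-laden illustration, not the authors' code) that builds a compact ReLU MLP under each scheme and probes per-layer activation variance at initialization; the depth, width, and batch size are arbitrary choices for the example.

```python
import torch
import torch.nn as nn

def build_mlp(depth=8, width=256, init="kaiming"):
    """Compact ReLU MLP; init is either 'kaiming' (fan-in) or 'xavier'."""
    layers = []
    for _ in range(depth):
        lin = nn.Linear(width, width)
        if init == "kaiming":
            # Variance-preserving for ReLU: std = sqrt(2 / fan_in)
            nn.init.kaiming_normal_(lin.weight, mode="fan_in", nonlinearity="relu")
        else:
            # Xavier/Glorot: std = sqrt(2 / (fan_in + fan_out))
            nn.init.xavier_normal_(lin.weight)
        nn.init.zeros_(lin.bias)
        layers += [lin, nn.ReLU()]
    return nn.Sequential(*layers)

@torch.no_grad()
def layer_variances(model, width=256, n_samples=1024):
    """Track activation variance layer by layer on random Gaussian inputs."""
    x = torch.randn(n_samples, width)
    variances = []
    for module in model:
        x = module(x)
        if isinstance(module, nn.ReLU):
            variances.append(x.var().item())
    return variances

for scheme in ("kaiming", "xavier"):
    vs = layer_variances(build_mlp(init=scheme))
    print(scheme, [round(v, 3) for v in vs])
```

Under Kaiming fan-in the activation variance stays roughly constant with depth, whereas Xavier's smaller scale lets it decay layer by layer under ReLU, which is the variance-preserving argument the abstract appeals to.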

Page Count
5 pages

Category
Computer Science:
Machine Learning (CS)