Depth-Aware Initialization for Stable and Efficient Neural Network Training
By: Vijay Pandey
Potential Business Impact:
Makes computer brains learn faster and better.
In the past few years, various initialization schemes have been proposed, including Glorot initialization, He initialization, orthogonal-matrix initialization, and the random walk initialization method. Some of these methods emphasize keeping unit variance of activations and gradients as they propagate through the network layers. A few of these methods are independent of depth information, while others take the total network depth into account for better initialization. In this paper, a comprehensive study is carried out in which the depth of each layer, as well as the total network depth, is incorporated into the initialization scheme. It is also shown that, for deeper networks, the theoretical assumption of unit variance throughout the network does not perform well; instead, the activation variance needs to increase from the first layer to the last layer. We propose a novel way to increase the variance of the network in a flexible manner that incorporates the depth of each layer. Experiments show that the proposed method performs better than existing initialization schemes.
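The abstract describes scaling each layer's weight variance using both its own depth and the total network depth so that activation variance grows from the first layer to the last. The exact schedule is not given in the abstract, so the snippet below is only a minimal sketch under assumed choices: a He-initialization baseline and a hypothetical geometric growth factor controlled by an assumed hyperparameter `growth`; it is not the authors' formula.

import numpy as np


def depth_aware_init(fan_in, fan_out, layer_idx, total_layers, growth=1.05):
    """He-style initialization scaled by a depth-dependent gain.

    layer_idx and total_layers encode the per-layer and total-depth
    information mentioned in the abstract; `growth` is an assumed
    hyperparameter that controls how quickly the target variance
    increases with depth (this schedule is illustrative only).
    """
    base_std = np.sqrt(2.0 / fan_in)              # He initialization baseline
    gain = growth ** (layer_idx / total_layers)   # hypothetical depth-aware gain >= 1
    return np.random.normal(0.0, base_std * gain, size=(fan_in, fan_out))


# Usage: initialize a 5-layer MLP whose weight variance grows slightly with depth.
sizes = [784, 512, 512, 512, 10]
weights = [
    depth_aware_init(sizes[i], sizes[i + 1], layer_idx=i, total_layers=len(sizes) - 1)
    for i in range(len(sizes) - 1)
]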
Similar Papers
Weight Initialization and Variance Dynamics in Deep Neural Networks and Large Language Models
Machine Learning (CS)
Makes computer learning faster and more stable.
Optimized Weight Initialization on the Stiefel Manifold for Deep ReLU Neural Networks
Machine Learning (CS)
Keeps computer brains from breaking when learning.
Sinusoidal Initialization, Time for a New Start
Machine Learning (CS)
Makes computer brains learn much faster and better.