A New Initialisation to Control Gradients in Sinusoidal Neural Networks
By: Andrea Combette, Antoine Venaille, Nelly Pustelnik
Potential Business Impact:
Makes neural networks train more stably and generalize better.
A proper initialisation strategy is of primary importance for mitigating exploding or vanishing gradients when training neural networks. Yet the impact of initialisation parameters still lacks a precise theoretical understanding for several well-established architectures. Here, we propose a new initialisation for networks with sinusoidal activation functions such as \texttt{SIREN}, focusing on the control of gradients, their scaling with network depth, and their impact on training and generalization. To achieve this, we identify a closed-form expression for the initialisation of the parameters, differing from the original \texttt{SIREN} scheme. This expression is derived from fixed points obtained through the convergence of the pre-activation distributions and of the variances of Jacobian sequences. Controlling the gradients while targeting vanishing pre-activations helps prevent the emergence of inappropriate frequencies during estimation, thereby improving generalization. We further show that this initialisation strongly influences training dynamics through the Neural Tangent Kernel (NTK) framework. Finally, we benchmark \texttt{SIREN} with the proposed initialisation against the original scheme and other baselines on function fitting and image reconstruction. The new initialisation consistently outperforms state-of-the-art methods across a wide range of reconstruction tasks, including those involving physics-informed neural networks.
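The abstract refers to, but does not reproduce, the closed-form expression for the proposed initialisation, so it cannot be sketched here. For reference, below is a minimal PyTorch sketch of the original \texttt{SIREN} scheme that the paper modifies: weights drawn uniformly with a fan-in-dependent bound scaled by the frequency factor omega_0 = 30. The names `SineLayer`, `omega_0`, and `is_first` follow the standard SIREN reference implementation; the layer widths in the usage example are illustrative.

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by sin(omega_0 * (W x + b)), initialised
    with the original SIREN scheme (Sitzmann et al., 2020)."""

    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                # First layer: weights drawn from U(-1/fan_in, 1/fan_in).
                bound = 1.0 / in_features
            else:
                # Hidden layers: U(-sqrt(6/fan_in)/omega_0, sqrt(6/fan_in)/omega_0),
                # chosen so pre-activation statistics stay controlled with depth.
                bound = math.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# Example: a small SIREN mapping 2-D coordinates to one output channel.
siren = nn.Sequential(
    SineLayer(2, 256, is_first=True),
    SineLayer(256, 256),
    nn.Linear(256, 1),
)
```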
Similar Papers
Improving Accuracy and Efficiency of Implicit Neural Representations: Making SIREN a WINNER
CV and Pattern Recognition
Makes AI better at drawing and understanding images.
Neural network initialization with nonlinear characteristics and information on spectral bias
Machine Learning (CS)
Teaches computers to learn faster and better.
Weight Initialization and Variance Dynamics in Deep Neural Networks and Large Language Models
Machine Learning (CS)
Makes machine learning faster and more stable.