Neural network initialization with nonlinear characteristics and information on spectral bias

Published: November 4, 2025 | arXiv ID: 2511.02244v1

By: Hikaru Homma, Jun Ohkubo

Potential Business Impact:

Better starting values let neural networks train faster and, in some cases, skip extra training entirely.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Initialization of neural network parameters, such as weights and biases, has a crucial impact on learning performance; if chosen well, it can even eliminate the need for additional training with backpropagation. For example, algorithms based on the ridgelet transform or the SWIM (sampling where it matters) concept have been proposed for initialization. On the other hand, it is well known that neural networks tend to learn coarse information in the earlier layers; this feature is called spectral bias. In this work, we investigate the effects of exploiting information on the spectral bias in the initialization of neural networks. Specifically, we propose a framework that adjusts the scale factors in the SWIM algorithm so that the early hidden layers capture low-frequency components and the late hidden layers represent high-frequency components. Numerical experiments on a one-dimensional regression task and the MNIST classification task demonstrate that the proposed method outperforms conventional initialization algorithms. This work clarifies the importance of intrinsic spectral properties in training neural networks, and the finding yields an effective parameter initialization strategy that enhances training performance.
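The abstract describes making the SWIM scale factors depth-dependent: small scales in early layers (low-frequency, coarse features) and larger scales in later layers (high-frequency detail). The sketch below is a minimal, hedged illustration of that idea, not the authors' actual algorithm: the function name `swim_like_init`, the linear scale schedule, the `tanh` activation, and the pair-sampling details are all assumptions for illustration. In SWIM-style schemes, each hidden unit's weight vector points along the difference of two sampled data points, and the bias centers the activation at one of them.

```python
import numpy as np

def swim_like_init(X, widths, rng=None, scale_lo=0.5, scale_hi=2.0):
    """Sketch of a SWIM-style initialization with depth-dependent scales.

    Assumption-laden illustration: the per-layer scale factor grows
    linearly with depth, so early layers respond to low-frequency
    (coarse) structure and later layers to high-frequency detail.
    """
    rng = np.random.default_rng(rng)
    params = []
    H = X  # activations fed into the current layer
    L = len(widths)
    for l, width in enumerate(widths):
        # Interpolate the scale from scale_lo (first layer, low freq)
        # to scale_hi (last layer, high freq).
        t = l / max(L - 1, 1)
        s = scale_lo + t * (scale_hi - scale_lo)
        n = H.shape[0]
        # Sample data-point pairs; weights point along their differences.
        i = rng.integers(0, n, size=width)
        j = rng.integers(0, n, size=width)
        d = H[j] - H[i]
        norm2 = np.maximum(np.sum(d * d, axis=1, keepdims=True), 1e-8)
        W = s * d / norm2                # shape (width, dim_in)
        b = -np.sum(W * H[i], axis=1)    # pre-activation is zero at anchor x_i
        params.append((W, b))
        H = np.tanh(H @ W.T + b)         # propagate to the next layer
    return params
```

With the hidden layers initialized this way, only the final linear readout would need fitting (e.g., by least squares), which is the backpropagation-free regime the abstract alludes to.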

Page Count
7 pages

Category
Computer Science:
Machine Learning (CS)