Score: 1

Improving Accuracy and Efficiency of Implicit Neural Representations: Making SIREN a WINNER

Published: September 16, 2025 | arXiv ID: 2509.12980v1

By: Hemanth Chandravamsi, Dhanush V. Shenoy, Steven H. Frankel

Potential Business Impact:

Improves how neural networks store and reconstruct signals such as audio, images, and 3D shapes, making these models both more accurate and more efficient to train.

Business Areas:
Speech Recognition Data and Analytics, Software

We identify and address a fundamental limitation of sinusoidal representation networks (SIRENs), a class of implicit neural representations. SIRENs (Sitzmann et al., 2020), when not appropriately initialized, can struggle to fit signals that fall outside their frequency support. In extreme cases, when the network's frequency support misaligns with the target spectrum, a 'spectral bottleneck' phenomenon is observed: the model collapses to a near-zero output and fails to recover even the frequency components that lie within its representational capacity. To overcome this, we propose WINNER (Weight Initialization with Noise for Neural Representations). WINNER perturbs the uniformly initialized weights of a base SIREN with Gaussian noise whose scale is adaptively determined by the spectral centroid of the target signal. Similar to random Fourier embeddings, this mitigates 'spectral bias', but without introducing additional trainable parameters. Our method achieves state-of-the-art audio fitting and significant gains over the base SIREN in image and 3D shape fitting tasks. Beyond signal fitting, WINNER suggests new avenues for adaptive, target-aware initialization strategies for optimizing deep neural network training. For code and data, visit cfdlabtechnion.github.io/siren_square/.
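The core idea in the abstract is an initialization-time perturbation: start from the standard uniform SIREN initialization and add Gaussian noise whose scale tracks the spectral centroid of the target signal. The sketch below illustrates that idea in PyTorch; the `winner_perturb` helper, its `base_bandwidth` and `gain` parameters, and the exact noise-scaling rule are illustrative assumptions, not the paper's actual formulation (see the linked project page for the authors' code).

```python
import numpy as np
import torch
import torch.nn as nn

def spectral_centroid(signal, sample_rate):
    """Spectral centroid of a 1D signal: magnitude-weighted mean frequency (Hz)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float((freqs * spectrum).sum() / (spectrum.sum() + 1e-12))

class SirenLayer(nn.Module):
    """One SIREN layer, y = sin(omega_0 * (W x + b)), with the standard
    uniform weight initialization from Sitzmann et al. (2020)."""
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features
            else:
                bound = np.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

def winner_perturb(siren_layers, centroid, base_bandwidth=100.0, gain=1.0):
    """Hypothetical WINNER-style perturbation: add zero-mean Gaussian noise to
    the uniformly initialized weights, with a standard deviation that grows
    with the target's spectral centroid. The linear scaling used here is an
    assumption for illustration, not the adaptive rule from the paper."""
    noise_std = gain * centroid / base_bandwidth
    with torch.no_grad():
        for layer in siren_layers:
            layer.linear.weight.add_(noise_std * torch.randn_like(layer.linear.weight))

# Example: perturb a small SIREN toward a high-frequency audio-like target.
sr = 16000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
target = np.sin(2 * np.pi * 3000.0 * t)          # 3 kHz tone
c = spectral_centroid(target, sr)                # ~3000 Hz for this signal
layers = [SirenLayer(1, 256, is_first=True)] + [SirenLayer(256, 256) for _ in range(3)]
winner_perturb(layers, c)
```

The design intent, as described in the abstract, is that higher-frequency targets receive a broader spread of initial weights, widening the network's effective frequency support without adding any trainable parameters.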

Page Count
10 pages

Category
Computer Science:
CV and Pattern Recognition