Enhancing Robustness of Implicit Neural Representations Against Weight Perturbations

Published: August 19, 2025 | arXiv ID: 2508.13481v1

By: Wenyong Zhou, Yuxin Cheng, Zhengwu Liu, and more

Potential Business Impact:

Keeps AI models working reliably even when their internal weights are corrupted by noise or hardware faults.

Implicit Neural Representations (INRs) encode discrete signals in a continuous manner using neural networks, demonstrating significant value across various multimedia applications. However, the vulnerability of INRs presents a critical challenge for their real-world deployment, as the network weights may be subjected to unavoidable perturbations. In this work, we investigate the robustness of INRs for the first time and find that even minor perturbations can lead to substantial degradation in signal reconstruction quality. To mitigate this issue, we formulate the robustness problem in INRs as minimizing the difference between the loss with and without weight perturbations. Furthermore, we derive a novel robust loss function that regulates the gradient of the reconstruction loss with respect to the weights, thereby enhancing robustness. Extensive experiments on reconstruction tasks across multiple modalities demonstrate that our method achieves up to a 7.5 dB improvement in peak signal-to-noise ratio (PSNR) over original INRs under noisy conditions.
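The core idea in the abstract can be sketched numerically: if a weight perturbation δ is small, then to first order L(w + δ) − L(w) ≈ δᵀ∇L(w), so penalizing the gradient norm of the reconstruction loss with respect to the weights shrinks the loss change under perturbation. The toy INR below (a one-hidden-layer sine-activated MLP), the network sizes, the finite-difference gradient, and the penalty weight `lam` are all illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Coordinates and target signal for a toy 1-D reconstruction task.
x = np.linspace(-1, 1, 64).reshape(-1, 1)
y = np.sin(np.pi * x)

def unpack(w):
    # Flat parameter vector -> (W1, b1, W2, b2) of a tiny MLP (sizes are illustrative).
    W1 = w[:16].reshape(16, 1); b1 = w[16:32]
    W2 = w[32:48].reshape(1, 16); b2 = w[48:49]
    return W1, b1, W2, b2

def recon_loss(w):
    # MSE reconstruction loss of a sine-activated (SIREN-style) toy INR.
    W1, b1, W2, b2 = unpack(w)
    h = np.sin(x @ W1.T + b1)
    pred = h @ W2.T + b2
    return np.mean((pred - y) ** 2)

def grad(w, eps=1e-5):
    # Central finite-difference gradient dL/dw (for illustration only;
    # a real implementation would use autodiff).
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (recon_loss(w + e) - recon_loss(w - e)) / (2 * eps)
    return g

def robust_loss(w, lam=0.1):
    # Reconstruction loss plus a gradient-norm penalty: a small dL/dw means
    # a small first-order change in L under a weight perturbation delta.
    return recon_loss(w) + lam * np.sum(grad(w) ** 2)

w = rng.normal(0.0, 0.5, 49)          # random toy weights
delta = rng.normal(0.0, 0.01, 49)     # small weight perturbation
print(f"L(w)        = {recon_loss(w):.4f}")
print(f"L(w+delta)  = {recon_loss(w + delta):.4f}")
print(f"robust loss = {robust_loss(w):.4f}")
```

Training against `robust_loss` instead of `recon_loss` flattens the loss landscape around the learned weights, which is the mechanism the abstract attributes to the proposed regularizer.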

Country of Origin
🇭🇰 Hong Kong

Page Count
5 pages

Category
Computer Science:
Computer Vision and Pattern Recognition