
I-INR: Iterative Implicit Neural Representations

Published: April 24, 2025 | arXiv ID: 2504.17364v2

By: Ali Haider, Muhammad Salman Ali, Maryam Qamar, and more

Potential Business Impact:

Improves image quality by recovering fine details lost to noise or degradation.

Business Areas:
Image Recognition Data and Analytics, Software

Implicit Neural Representations (INRs) have revolutionized signal processing and computer vision by modeling signals as continuous, differentiable functions parameterized by neural networks. However, their inherent formulation as a regression problem makes them prone to regression to the mean, limiting their ability to capture fine details, retain high-frequency information, and handle noise effectively. To address these challenges, we propose Iterative Implicit Neural Representations (I-INRs), a novel plug-and-play framework that enhances signal reconstruction through an iterative refinement process. I-INRs effectively recover high-frequency details, improve robustness to noise, and achieve superior reconstruction quality. Our framework integrates seamlessly with existing INR architectures, delivering substantial performance gains across various tasks. Extensive experiments show that I-INRs outperform baseline methods, including WIRE, SIREN, and Gauss, in diverse computer vision applications such as image restoration, image denoising, and object occupancy prediction.
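
For readers unfamiliar with the setup, the sketch below illustrates one plausible reading of "iterative refinement" on top of a standard INR: a small SIREN-style network is fit to the signal, and subsequent stages fit the residual left by the running reconstruction. This is an assumption-laden illustration in PyTorch, not the paper's actual algorithm; the network sizes, training loop, and residual-fitting scheme are all hypothetical.

```python
# Hypothetical sketch: iterative residual refinement with SIREN-style INRs.
# Not the I-INR paper's exact method; stage-wise residual fitting is an assumption.
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sine activation (SIREN-style)."""
    def __init__(self, in_f, out_f, w0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_f, out_f)
        self.w0 = w0

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

def make_inr(hidden=64, depth=3):
    # Maps 2D coordinates (x, y) to a scalar intensity.
    layers = [SineLayer(2, hidden)]
    layers += [SineLayer(hidden, hidden) for _ in range(depth - 1)]
    layers += [nn.Linear(hidden, 1)]
    return nn.Sequential(*layers)

def fit(net, coords, target, steps=500, lr=1e-4):
    # Standard MSE regression of the INR onto the target signal.
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(coords) - target) ** 2).mean()
        loss.backward()
        opt.step()
    return net

def iterative_inr(coords, signal, num_iters=3):
    """Fit a sequence of INRs, each modeling the residual of the running reconstruction."""
    recon = torch.zeros_like(signal)
    stages = []
    for _ in range(num_iters):
        residual = signal - recon          # what the previous stages failed to capture
        net = fit(make_inr(), coords, residual)
        with torch.no_grad():
            recon = recon + net(coords)    # accumulate the refined reconstruction
        stages.append(net)
    return recon, stages

# Usage (hypothetical): coords in [-1, 1]^2 and a flattened grayscale image as the target.
# coords = torch.rand(4096, 2) * 2 - 1
# signal = torch.rand(4096, 1)
# recon, stages = iterative_inr(coords, signal)
```

The intuition behind this kind of residual scheme is that later stages only see the high-frequency content the earlier fits smoothed away, which is one way to counteract the regression-to-the-mean behavior the abstract describes.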

Country of Origin
🇰🇷 Korea, Republic of

Page Count
16 pages

Category
Computer Science:
CV and Pattern Recognition