I-INR: Iterative Implicit Neural Representations
By: Ali Haider, Muhammad Salman Ali, Maryam Qamar, and more
Potential Business Impact:
Improves pictures by adding back lost details.
Implicit Neural Representations (INRs) have revolutionized signal processing and computer vision by modeling signals as continuous, differentiable functions parameterized by neural networks. However, their inherent formulation as a regression problem makes them prone to regression to the mean, limiting their ability to capture fine details, retain high-frequency information, and handle noise effectively. To address these challenges, we propose Iterative Implicit Neural Representations (I-INRs), a novel plug-and-play framework that enhances signal reconstruction through an iterative refinement process. I-INRs effectively recover high-frequency details, improve robustness to noise, and achieve superior reconstruction quality. Our framework integrates seamlessly with existing INR architectures, delivering substantial performance gains across tasks. Extensive experiments show that I-INRs outperform baseline methods, including WIRE, SIREN, and Gauss, in diverse computer vision applications such as image restoration, image denoising, and object occupancy prediction.
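To make the idea of iterative refinement on top of an existing INR concrete, here is a minimal sketch in PyTorch. It assumes a residual-fitting scheme: each iteration fits a fresh SIREN-style network to the residual left by the running reconstruction, so later iterations concentrate on high-frequency detail. The paper's exact update rule is not given in this abstract; the names Siren, fit_inr, and iterative_inr, along with all hyperparameters, are illustrative choices, not the authors' implementation.

```python
# Hedged sketch: iterative refinement wrapped around a plain INR.
# Assumption: each round fits the residual of the current reconstruction,
# which is one plausible plug-and-play way to realize "iterative INRs".
import torch
import torch.nn as nn


class Siren(nn.Module):
    """A small SIREN-style MLP: sinusoidal activations over input coordinates."""

    def __init__(self, in_dim=2, hidden=128, out_dim=1, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.layers = nn.ModuleList([
            nn.Linear(in_dim, hidden),
            nn.Linear(hidden, hidden),
            nn.Linear(hidden, out_dim),
        ])

    def forward(self, coords):
        x = torch.sin(self.omega_0 * self.layers[0](coords))
        x = torch.sin(self.omega_0 * self.layers[1](x))
        return self.layers[2](x)


def fit_inr(model, coords, target, steps=500, lr=1e-4):
    """Fit one INR to a target signal by plain MSE regression."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model(coords) - target) ** 2).mean()
        loss.backward()
        opt.step()
    return model


def iterative_inr(coords, signal, num_iters=3):
    """Iteratively refine: each new INR fits the residual of the running
    reconstruction, which tends to recover progressively finer detail."""
    recon = torch.zeros_like(signal)
    models = []
    for _ in range(num_iters):
        residual = signal - recon
        model = fit_inr(Siren(), coords, residual)
        with torch.no_grad():
            recon = recon + model(coords)
        models.append(model)
    return recon, models


if __name__ == "__main__":
    # Toy example: a synthetic 2D "image" sampled on a coordinate grid.
    side = 32
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, side), torch.linspace(-1, 1, side), indexing="ij"
    )
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    signal = torch.sin(4 * torch.pi * coords[:, :1]) * torch.cos(6 * torch.pi * coords[:, 1:])
    recon, _ = iterative_inr(coords, signal, num_iters=3)
    print("final MSE:", ((recon - signal) ** 2).mean().item())
```

In this sketch the "plug-and-play" aspect is that Siren could be swapped for any backbone (WIRE, Gauss, a ReLU MLP with positional encoding) without changing the outer refinement loop.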
Similar Papers
Accelerated Optimization of Implicit Neural Representations for CT Reconstruction
Image and Video Processing
Makes X-ray scans faster and clearer.
Sampling Theory for Super-Resolution with Implicit Neural Representations
Image and Video Processing
Makes blurry pictures sharp again.
Scaling Implicit Fields via Hypernetwork-Driven Multiscale Coordinate Transformations
Artificial Intelligence
Makes computer pictures clearer with less data.