MINR: Efficient Implicit Neural Representations for Multi-Image Encoding
By: Wenyong Zhou, Taiqiang Wu, Zhengwu Liu, and more
Potential Business Impact:
Saves storage by sharing neural network layers across images.
Implicit Neural Representations (INRs) aim to parameterize discrete signals through implicit continuous functions. However, formulating each image with a separate neural network (typically a multi-layer perceptron, MLP) leads to computational and storage inefficiencies when encoding multiple images. To address this issue, we propose MINR, which shares specific layers to encode multiple images efficiently. We first compare the layer-wise weight distributions of several trained INRs and find that corresponding intermediate layers follow highly similar distribution patterns. Motivated by this, we share these intermediate layers across multiple images while keeping the input and output layers input-specific. In addition, we design a novel extra projection layer for each image to capture its unique features. Experimental results on image reconstruction and super-resolution tasks demonstrate that MINR saves up to 60% of parameters while maintaining comparable performance. In particular, MINR scales effectively to 100 images, maintaining an average peak signal-to-noise ratio (PSNR) of 34 dB. Further analysis across various backbones confirms the robustness of the proposed MINR.
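The architecture the abstract describes can be sketched in a few lines: intermediate MLP weights are stored once and reused for every image, while each image keeps its own input layer, projection layer, and output layer. The following NumPy sketch is purely illustrative (all names, layer sizes, and initializations are assumptions, not the authors' code) and shows only the forward pass and the parameter-count saving, not training.

```python
import numpy as np

# Illustrative sketch of the MINR layer-sharing idea (hypothetical names
# and sizes, not the authors' implementation). An INR maps a pixel
# coordinate (x, y) to an RGB value with an MLP; here the intermediate
# layers are shared across all images.

rng = np.random.default_rng(0)
HIDDEN, N_IMAGES = 64, 3

def relu(x):
    return np.maximum(x, 0.0)

# Intermediate layers: stored once, shared by every image.
shared = [rng.standard_normal((HIDDEN, HIDDEN)) * 0.1 for _ in range(2)]

# Per-image parameters: input layer (2 coords -> hidden), the extra
# projection layer (hidden -> hidden), and output layer (hidden -> RGB).
per_image = [
    {
        "w_in": rng.standard_normal((2, HIDDEN)) * 0.1,
        "w_proj": rng.standard_normal((HIDDEN, HIDDEN)) * 0.1,
        "w_out": rng.standard_normal((HIDDEN, 3)) * 0.1,
    }
    for _ in range(N_IMAGES)
]

def minr_forward(coords, img_idx):
    """Evaluate the INR for one image at (x, y) coordinates in [0, 1]^2."""
    p = per_image[img_idx]
    h = relu(coords @ p["w_in"])    # image-specific input layer
    h = relu(h @ p["w_proj"])       # image-specific projection layer
    for w in shared:                # intermediate layers shared by all images
        h = relu(h @ w)
    return h @ p["w_out"]           # image-specific output layer

coords = rng.random((5, 2))         # 5 query coordinates
rgb = minr_forward(coords, img_idx=1)
print(rgb.shape)                    # (5, 3)

# Parameter saving: the shared block is stored once instead of N times.
shared_params = sum(w.size for w in shared)
specific_params = sum(sum(w.size for w in p.values()) for p in per_image)
minr_total = shared_params + specific_params
separate_total = N_IMAGES * shared_params + specific_params
print(minr_total < separate_total)  # True
```

The saving grows with the number of images: the shared intermediate block dominates the parameter count of a single INR, so amortizing it over N images approaches the per-layer sharing ratio the paper reports (up to 60% fewer parameters).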
Similar Papers
Enhancing Robustness of Implicit Neural Representations Against Weight Perturbations
CV and Pattern Recognition
Makes AI models robust to noise in their weights.
Split-Layer: Enhancing Implicit Neural Representation by Maximizing the Dimensionality of Feature Space
CV and Pattern Recognition
Makes AI understand complex shapes and images better.
I-INR: Iterative Implicit Neural Representations
CV and Pattern Recognition
Improves pictures by adding back lost details.