MINR: Efficient Implicit Neural Representations for Multi-Image Encoding

Published: August 19, 2025 | arXiv ID: 2508.13471v1

By: Wenyong Zhou, Taiqiang Wu, Zhengwu Liu, and more

Potential Business Impact:

Saves storage and compute by sharing neural network layers across multiple images.

Implicit Neural Representations (INRs) aim to parameterize discrete signals through implicit continuous functions. However, formulating each image with a separate neural network (typically a Multi-Layer Perceptron (MLP)) leads to computational and storage inefficiencies when encoding multiple images. To address this issue, we propose MINR, which shares specific layers to encode multiple images efficiently. We first compare the layer-wise weight distributions of several trained INRs and find that the corresponding intermediate layers follow highly similar distribution patterns. Motivated by this, we share these intermediate layers across multiple images while keeping the input and output layers image-specific. In addition, we design a novel extra projection layer for each image to capture its unique features. Experimental results on image reconstruction and super-resolution tasks demonstrate that MINR can save up to 60% of parameters while maintaining comparable performance. In particular, MINR scales effectively to 100 images, maintaining an average peak signal-to-noise ratio (PSNR) of 34 dB. Further analysis with various backbones confirms the robustness of the proposed MINR.
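
To make the architecture concrete, here is a minimal PyTorch sketch of the sharing scheme the abstract describes: intermediate MLP layers shared across all images, with a per-image input layer, output layer, and extra projection layer. The layer widths, depth, activation choice, and the placement of the projection layer are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class MINRSketch(nn.Module):
    """Hypothetical MINR-style multi-image INR.

    Intermediate layers are a single shared stack reused by every image;
    the input, projection, and output layers are kept image-specific.
    """

    def __init__(self, num_images, in_dim=2, hidden=256, out_dim=3, shared_depth=3):
        super().__init__()
        # Image-specific input layers: (x, y) coordinates -> hidden features
        self.input_layers = nn.ModuleList(
            nn.Linear(in_dim, hidden) for _ in range(num_images)
        )
        # Shared intermediate layers (one copy, used for all images)
        self.shared = nn.Sequential(
            *[m for _ in range(shared_depth)
              for m in (nn.Linear(hidden, hidden), nn.ReLU())]
        )
        # Image-specific projection layers capturing per-image features
        self.projections = nn.ModuleList(
            nn.Linear(hidden, hidden) for _ in range(num_images)
        )
        # Image-specific output layers: hidden features -> RGB
        self.output_layers = nn.ModuleList(
            nn.Linear(hidden, out_dim) for _ in range(num_images)
        )

    def forward(self, coords, image_idx):
        h = torch.relu(self.input_layers[image_idx](coords))
        h = self.shared(h)                               # shared across images
        h = torch.relu(self.projections[image_idx](h))   # per-image projection
        return self.output_layers[image_idx](h)

# Usage: query predicted pixel colors of image 0 at normalized coordinates
model = MINRSketch(num_images=100)
coords = torch.rand(1024, 2)       # (x, y) in [0, 1]
rgb = model(coords, image_idx=0)   # shape (1024, 3)
```

Under this sketch, the parameter savings come from the shared stack: its cost is paid once, while each additional image adds only three small layers instead of a full MLP.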

Country of Origin
🇭🇰 Hong Kong

Page Count
5 pages

Category
Computer Science:
Computer Vision and Pattern Recognition