MINR: Implicit Neural Representations with Masked Image Modelling

Published: July 30, 2025 | arXiv ID: 2507.22404v1

By: Sua Lee, Joonhun Lee, Myungjoo Kang

Potential Business Impact:

Improves how reliably computer vision models recognize images, including unfamiliar, out-of-distribution data, while using smaller models.

Business Areas:
Image Recognition, Data and Analytics, Software

Self-supervised learning methods like masked autoencoders (MAE) have shown significant promise in learning robust feature representations, particularly in image reconstruction-based pretraining tasks. However, their performance often depends strongly on the masking strategies used during training and can degrade when applied to out-of-distribution data. To address these limitations, we introduce the masked implicit neural representations (MINR) framework, which synergizes implicit neural representations with masked image modeling. MINR learns a continuous function to represent images, enabling more robust and generalizable reconstructions irrespective of masking strategies. Our experiments demonstrate that MINR outperforms MAE not only in in-domain scenarios but also in out-of-distribution settings, while reducing model complexity. The versatility of MINR extends to various self-supervised learning applications, confirming its utility as a robust and efficient alternative to existing frameworks.
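
For intuition, here is a minimal sketch (not the authors' implementation) of the core idea: an implicit neural representation, in this case a small coordinate MLP, is fit only to the visible pixels of a randomly masked image, and reconstructing the masked regions amounts to evaluating the learned continuous function at those coordinates. The image size, network width, mask ratio, and training loop below are illustrative assumptions.

    # Minimal sketch (illustrative, not the paper's code): fit a coordinate MLP
    # (an implicit neural representation) to the visible pixels of a masked
    # image, then query it everywhere to reconstruct the masked regions.
    import torch
    import torch.nn as nn

    H = W = 32                                  # toy image size (assumption)
    image = torch.rand(H, W, 3)                 # stand-in for a real image

    # (x, y) coordinates normalized to [-1, 1], one row per pixel
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
    )
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    pixels = image.reshape(-1, 3)

    # Random pixel mask: the INR is trained only on the visible subset,
    # analogous to a masked-image-modeling pretraining objective.
    visible = torch.rand(H * W) > 0.75          # keep ~25% of pixels

    inr = nn.Sequential(                        # small coordinate MLP
        nn.Linear(2, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 3),
    )
    opt = torch.optim.Adam(inr.parameters(), lr=1e-3)

    for step in range(500):
        opt.zero_grad()
        pred = inr(coords[visible])
        loss = nn.functional.mse_loss(pred, pixels[visible])
        loss.backward()
        opt.step()

    # The representation is a continuous function of coordinates, so
    # reconstruction at masked locations is just evaluation at those points.
    with torch.no_grad():
        recon = inr(coords).reshape(H, W, 3)

Because the representation is defined over continuous coordinates rather than a fixed patch grid, the same procedure applies under any masking pattern, which is the property the abstract credits for MINR's robustness across masking strategies.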

Country of Origin
🇰🇷 Korea, Republic of

Page Count
6 pages

Category
Computer Science:
CV and Pattern Recognition