Optimizing Rank for High-Fidelity Implicit Neural Representations

Published: December 16, 2025 | arXiv ID: 2512.14366v1

By: Julian McGinnis, Florian A. Hölzl, Suprosanna Shit and more

Potential Business Impact:

Lets simple neural networks produce sharp, detailed pictures.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Implicit Neural Representations (INRs) based on vanilla Multi-Layer Perceptrons (MLPs) are widely believed to be incapable of representing high-frequency content. This has directed research efforts towards architectural interventions, such as coordinate embeddings or specialized activation functions, to represent high-frequency signals. In this paper, we challenge the notion that the low-frequency bias of vanilla MLPs is an intrinsic, architectural limitation on learning high-frequency content, and argue instead that it is a symptom of stable rank degradation during training. We empirically demonstrate that regulating the network's rank during training substantially improves the fidelity of the learned signal, rendering even simple MLP architectures expressive. Extensive experiments show that using optimizers like Muon, whose updates are high-rank and near-orthogonal, consistently enhances INR architectures, not just simple ReLU MLPs. These substantial improvements hold across a diverse range of domains, including natural and medical images and novel view synthesis, with up to 9 dB PSNR improvements over the previous state of the art. Our project page, which includes code and experimental results, is available at: https://muon-inrs.github.io.
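For intuition, the stable rank mentioned in the abstract is commonly defined as the squared Frobenius norm of a weight matrix divided by its squared spectral norm, and Muon-style optimizers orthogonalize the raw gradient update so that all its singular directions contribute roughly equally. The sketch below (PyTorch, not the authors' code) illustrates both ideas on a hypothetical vanilla ReLU MLP; the layer sizes, the `stable_rank` helper, and the `orthogonalize` routine are illustrative assumptions, not the paper's exact scheme.

```python
import torch
import torch.nn as nn


def stable_rank(w: torch.Tensor) -> float:
    """Stable rank ||W||_F^2 / ||W||_2^2 of a 2-D weight matrix."""
    fro_sq = w.pow(2).sum()
    spec = torch.linalg.matrix_norm(w, ord=2)  # largest singular value
    return (fro_sq / spec.pow(2)).item()


def orthogonalize(update: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Push an update matrix toward its nearest (semi-)orthogonal matrix via a
    Newton-Schulz iteration -- a sketch of the 'high-rank, near-orthogonal
    update' idea behind Muon-style optimizers, not the paper's exact method."""
    # Normalizing by the Frobenius norm keeps all singular values <= 1,
    # which is sufficient for the iteration below to converge.
    x = update / (torch.linalg.matrix_norm(update, ord="fro") + 1e-8)
    for _ in range(steps):
        x = 1.5 * x - 0.5 * x @ x.T @ x
    return x


# Hypothetical vanilla ReLU MLP mapping 2-D coordinates to RGB values.
mlp = nn.Sequential(
    nn.Linear(2, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 3),
)

# Monitoring stable rank per linear layer during training would reveal the
# degradation the paper attributes the low-frequency bias to.
for name, module in mlp.named_modules():
    if isinstance(module, nn.Linear):
        print(f"layer {name}: stable rank = {stable_rank(module.weight.data):.2f}")
```

In practice one would call `orthogonalize` on each layer's (momentum-accumulated) gradient before applying the step, which is roughly how Muon keeps weight updates high-rank; the exact iteration coefficients and scaling used by the paper may differ.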

Country of Origin
🇩🇪 Germany

Page Count
23 pages

Category
Computer Science:
CV and Pattern Recognition