Split-Layer: Enhancing Implicit Neural Representation by Maximizing the Dimensionality of Feature Space
By: Zhicheng Cai, Hao Zhu, Linsen Chen, and more
Potential Business Impact:
Makes AI understand complex shapes and images better.
Implicit neural representation (INR) models signals as continuous functions using neural networks, offering efficient and differentiable optimization for inverse problems across diverse disciplines. However, the representational capacity of INR, defined by the range of functions the neural network can characterize, is inherently limited by the low-dimensional feature space of conventional multilayer perceptron (MLP) architectures. While widening the MLP can linearly increase feature space dimensionality, it also leads to a quadratic growth in computational and memory costs. To address this limitation, we propose the split-layer, a novel reformulation of MLP construction. The split-layer divides each layer into multiple parallel branches and integrates their outputs via the Hadamard product, effectively constructing a high-degree polynomial space. This approach significantly enhances INR's representational capacity by expanding the feature space dimensionality without incurring prohibitive computational overhead. Extensive experiments demonstrate that the split-layer substantially improves INR performance, surpassing existing methods across multiple tasks, including 2D image fitting, 2D CT reconstruction, 3D shape representation, and 5D novel view synthesis.
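The following is a minimal sketch of the split-layer idea described in the abstract: each layer is split into parallel branches whose outputs are fused by an elementwise (Hadamard) product, so the composed network spans a higher-degree polynomial feature space while parameters grow only linearly with the branch count. The class names (SplitLayer, SplitINR), the branch count, the plain Linear branches, and the ReLU backbone are illustrative assumptions, not the authors' exact architecture or activations.

```python
import torch
import torch.nn as nn


class SplitLayer(nn.Module):
    """Illustrative split-layer: parallel linear branches fused by a
    Hadamard (elementwise) product. Not the authors' exact design."""

    def __init__(self, in_features, out_features, num_branches=2):
        super().__init__()
        # Each branch is an ordinary linear map, so the parameter count
        # grows linearly with the number of branches rather than
        # quadratically with a widened hidden dimension.
        self.branches = nn.ModuleList(
            nn.Linear(in_features, out_features) for _ in range(num_branches)
        )

    def forward(self, x):
        # Multiplying branch outputs elementwise builds terms that are
        # higher-degree polynomials in the input features.
        out = self.branches[0](x)
        for branch in self.branches[1:]:
            out = out * branch(x)
        return out


class SplitINR(nn.Module):
    """Toy coordinate-based network (e.g., 2D image fitting) built from
    split-layers; hyperparameters here are assumptions for illustration."""

    def __init__(self, in_dim=2, hidden=256, out_dim=3, depth=4, num_branches=2):
        super().__init__()
        dims = [in_dim] + [hidden] * (depth - 1) + [out_dim]
        layers = []
        for i in range(depth):
            layers.append(SplitLayer(dims[i], dims[i + 1], num_branches))
            if i < depth - 1:
                layers.append(nn.ReLU())
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)


if __name__ == "__main__":
    model = SplitINR()
    coords = torch.rand(1024, 2)   # 2D pixel coordinates in [0, 1)
    rgb = model(coords)            # predicted colors, shape (1024, 3)
    print(rgb.shape)
```

In this sketch, replacing each split-layer with a single widened Linear layer would recover the standard MLP whose cost grows quadratically with width; the multiplicative fusion is what raises the feature-space dimensionality at roughly linear extra cost.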
Similar Papers
Scaling Implicit Fields via Hypernetwork-Driven Multiscale Coordinate Transformations
Artificial Intelligence
Makes computer pictures clearer with less data.
MINR: Efficient Implicit Neural Representations for Multi-Image Encoding
CV and Pattern Recognition
Saves space by sharing computer brain parts.
Detail Across Scales: Multi-Scale Enhancement for Full Spectrum Neural Representations
Machine Learning (CS)
Stores detailed pictures using less computer space.