VI3NR: Variance Informed Initialization for Implicit Neural Representations
By: Chamin Hewa Koneputugodage, Yizhak Ben-Shabat, Sameera Ramasinghe, and more
Potential Business Impact:
Makes AI learn better from images, sound, and shapes.
Implicit Neural Representations (INRs) are a versatile and powerful tool for encoding various forms of data, including images, videos, sound, and 3D shapes. A critical factor in the success of INRs is the initialization of the network, which can significantly impact the convergence and accuracy of the learned model. Unfortunately, commonly used neural network initializations are not applicable to many activation functions, especially those used by INRs. In this paper, we improve upon previous initialization methods by deriving an initialization that has stable variance across layers and applies to any activation function. We show that this generalizes many previous initialization methods and has even better stability for well-studied activations. We also show that our initialization leads to improved results with INR activation functions across multiple signal modalities. Our approach is particularly effective for Gaussian INRs, where we demonstrate that the theory of our initialization matches task performance in multiple experiments, allowing us to achieve improvements in image, audio, and 3D surface reconstruction.
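To make the core idea concrete, below is a minimal sketch of variance-informed initialization under standard simplifying assumptions (Gaussian pre-activations, independence of weights and inputs); it is not the paper's exact derivation. The function name `variance_informed_std` and its parameters are illustrative: the weight variance is chosen so that each layer's pre-activation variance stays at a target value, with the activation's second moment estimated by Monte Carlo so the recipe works for any activation, including the Gaussian activation used by Gaussian INRs.

```python
import numpy as np

def variance_informed_std(activation, fan_in, pre_act_var=1.0,
                          n_samples=100_000, seed=0):
    """Sketch: pick a weight std that keeps pre-activation variance stable.

    Assuming pre-activations z ~ N(0, pre_act_var), the next layer's
    pre-activation variance is roughly fan_in * Var(W) * E[activation(z)^2].
    Setting Var(W) = pre_act_var / (fan_in * E[activation(z)^2]) holds that
    variance fixed layer to layer (illustrative, not the paper's derivation).
    """
    rng = np.random.default_rng(seed)
    z = rng.normal(0.0, np.sqrt(pre_act_var), n_samples)
    second_moment = np.mean(activation(z) ** 2)           # Monte Carlo E[g(z)^2]
    weight_var = pre_act_var / (fan_in * second_moment)   # variance-matching condition
    return np.sqrt(weight_var)

# Gaussian INR activation g(z) = exp(-z^2 / (2 s^2)), with an assumed width s.
s = 0.1
gaussian = lambda z: np.exp(-z**2 / (2 * s**2))
print(variance_informed_std(gaussian, fan_in=256))

# Sanity check: for ReLU, E[relu(z)^2] = pre_act_var / 2, so the formula
# recovers the familiar Kaiming/He scaling std = sqrt(2 / fan_in).
print(variance_informed_std(lambda z: np.maximum(z, 0.0), fan_in=256))
```

The ReLU check illustrates the "generalizes previous initializations" claim in the abstract: plugging a known activation into the same variance-matching condition reproduces its standard initialization scale.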
Similar Papers
I-INR: Iterative Implicit Neural Representations
CV and Pattern Recognition
Improves pictures by adding back lost details.
Accelerated Optimization of Implicit Neural Representations for CT Reconstruction
Image and Video Processing
Makes X-ray scans faster and clearer.
Temporal Variational Implicit Neural Representations
Machine Learning (CS)
Predicts missing data in messy timelines instantly.