The Vekua Layer: Exact Physical Priors for Implicit Neural Representations via Generalized Analytic Functions
By: Vladimer Khasia
Implicit Neural Representations (INRs) have emerged as a powerful paradigm for parameterizing physical fields, yet they often suffer from spectral bias and the computational expense of non-convex optimization. We introduce the Vekua Layer (VL), a differentiable spectral method grounded in the classical theory of Generalized Analytic Functions. By restricting the hypothesis space to the kernel of the governing differential operator -- specifically, using harmonic and Fourier-Bessel bases -- the VL transforms the learning task from iterative gradient descent into a strictly convex least-squares problem solved via linear projection. We evaluate the VL against Sinusoidal Representation Networks (SIRENs) on homogeneous elliptic partial differential equations (PDEs). Our results demonstrate that the VL achieves machine precision ($\text{MSE} \approx 10^{-33}$) on exact reconstruction tasks and exhibits superior stability under incoherent sensor noise ($\text{MSE} \approx 0.03$), effectively acting as a physics-informed spectral filter. Furthermore, we show that the VL enables "holographic" extrapolation of global fields from partial boundary data via analytic continuation, a capability absent from standard coordinate-based approximations.
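To make the core mechanism concrete: because every basis function already satisfies the governing PDE exactly, fitting the field reduces to a single linear least-squares solve, and evaluating the fitted coefficients anywhere in the domain gives the "holographic" extrapolation the abstract describes. The sketch below is a minimal illustration of this idea for Laplace's equation in 2D, using the harmonic basis $\{1, \operatorname{Re}(z^n), \operatorname{Im}(z^n)\}$; it is not the authors' implementation, and names such as `fit_vekua` and `harmonic_basis` are illustrative assumptions.

```python
# Minimal sketch of a Vekua-Layer-style projection (illustrative, not the
# paper's code). Every basis column is itself a solution of Laplace's
# equation, so fitting noisy samples is a strictly convex least-squares
# problem rather than a non-convex neural-network optimization.
import numpy as np

def harmonic_basis(x, y, degree):
    """Evaluate the harmonic polynomial basis {1, Re(z^n), Im(z^n)} at (x, y)."""
    z = x + 1j * y
    cols = [np.ones_like(x)]
    for n in range(1, degree + 1):
        zn = z ** n
        cols.append(zn.real)  # Re(z^n) is harmonic
        cols.append(zn.imag)  # Im(z^n) is harmonic
    return np.stack(cols, axis=-1)

def fit_vekua(x, y, u, degree=8):
    """Project samples u onto the basis: one convex least-squares solve."""
    A = harmonic_basis(x, y, degree)
    coeffs, *_ = np.linalg.lstsq(A, u, rcond=None)
    return coeffs

def evaluate(coeffs, x, y, degree=8):
    """Evaluate the fitted field anywhere in the domain (analytic continuation)."""
    return harmonic_basis(x, y, degree) @ coeffs

# Recover the harmonic field u(x, y) = x^2 - y^2 from noisy samples on the
# unit circle, then extrapolate to an interior point.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
xb, yb = np.cos(theta), np.sin(theta)                  # boundary "sensors"
ub = xb**2 - yb**2 + 0.01 * rng.standard_normal(200)   # incoherent noise
c = fit_vekua(xb, yb, ub)
print(evaluate(c, np.array([0.3]), np.array([0.4])))   # approx. 0.09 - 0.16 = -0.07
```

Because the least-squares problem is linear in the coefficients, the fit acts as a projection onto the PDE's solution space, which is why off-manifold sensor noise is filtered out rather than memorized.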