Fourier Learning Machines: Nonharmonic Fourier-Based Neural Networks for Scientific Machine Learning
By: Mominul Rubel, Adam Meyers, Gabriel Nicolosi
Potential Business Impact:
Learns complex patterns by breaking them into simple waves.
We introduce the Fourier Learning Machine (FLM), a neural network (NN) architecture designed to represent a multidimensional nonharmonic Fourier series. The FLM uses a simple feedforward structure with cosine activation functions to learn the frequencies, amplitudes, and phase shifts of the series as trainable parameters. This design allows the model to create a problem-specific spectral basis adaptable to both periodic and nonperiodic functions. Unlike previous Fourier-inspired NN models, the FLM is the first architecture able to represent a complete, separable Fourier basis in multiple dimensions using a standard Multilayer Perceptron-like architecture. A one-to-one correspondence between the Fourier coefficients and the amplitudes and phase shifts is demonstrated, allowing translation between the full, separable basis form and the cosine phase-shifted one. Additionally, we evaluate the performance of FLMs on several scientific computing problems, including benchmark Partial Differential Equations (PDEs) and a family of Optimal Control Problems (OCPs). Computational experiments show that the performance of FLMs is comparable, and often superior, to that of established architectures like SIREN and vanilla feedforward NNs.
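The following is a minimal sketch of the idea described in the abstract: a feedforward network with a cosine hidden activation, in which the hidden weights play the role of frequencies, the hidden biases the phase shifts, and the output weights the amplitudes. It assumes PyTorch; the class name, layer sizes, target function, and training loop are illustrative choices, not the authors' implementation.

```python
# Minimal FLM-style sketch (assumed PyTorch; names and hyperparameters are illustrative).
import torch
import torch.nn as nn


class FourierLearningMachine(nn.Module):
    """One-hidden-layer MLP with cosine activation.

    Approximates f(x) ~ a0 + sum_k A_k * cos(w_k . x + phi_k), where the
    frequencies w_k (hidden weights), phase shifts phi_k (hidden biases),
    and amplitudes A_k (output weights, plus constant a0) are trainable.
    """

    def __init__(self, in_dim: int, n_terms: int = 64, out_dim: int = 1):
        super().__init__()
        self.freq_phase = nn.Linear(in_dim, n_terms)   # learns w_k and phi_k
        self.amplitude = nn.Linear(n_terms, out_dim)   # learns A_k and a0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.amplitude(torch.cos(self.freq_phase(x)))


# Usage: fit a nonperiodic 1D target by gradient descent.
if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.linspace(-1.0, 1.0, 256).unsqueeze(-1)
    y = torch.exp(x) * torch.sin(5.0 * x)              # example nonperiodic target

    model = FourierLearningMachine(in_dim=1, n_terms=32)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(2000):
        opt.zero_grad()
        loss = torch.mean((model(x) - y) ** 2)
        loss.backward()
        opt.step()
    print(f"final MSE: {loss.item():.3e}")
```

The correspondence mentioned in the abstract follows from the identity A cos(wx + phi) = (A cos phi) cos(wx) - (A sin phi) sin(wx), which maps each learned amplitude-phase pair to a pair of cosine and sine coefficients in the separable basis form, and vice versa.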
Similar Papers
Fourier Feature Networks for High-Fidelity Prediction of Perturbed Optical Fields
Optics
Makes light bend predictably in tubes.
Fourier Neural Operators for Structural Dynamics Models: Challenges, Limitations and Advantages of Using a Spectrogram Loss
Computational Engineering, Finance, and Science
Helps computer models predict how structures vibrate.
From Taylor Series to Fourier Synthesis: The Periodic Linear Unit
Machine Learning (CS)
Makes AI smarter with fewer computer parts.