Fourier Learning Machines: Nonharmonic Fourier-Based Neural Networks for Scientific Machine Learning

Published: 16 Dec 2025, Last Modified: 16 Dec 2025. Accepted by TMLR. License: CC BY 4.0
Abstract: We introduce the Fourier Learning Machine (FLM), a neural network (NN) architecture designed to represent a multidimensional nonharmonic Fourier series. The FLM uses a simple feedforward structure with cosine activation functions to learn the frequencies, amplitudes, and phase shifts of the series as trainable parameters. This design allows the model to construct a problem-specific spectral basis adaptable to both periodic and nonperiodic functions. Unlike previous Fourier-inspired NN models, the FLM is the first architecture able to represent a multidimensional Fourier series with a complete set of basis functions in separable form, doing so with a standard Multilayer Perceptron-like architecture. We demonstrate a one-to-one correspondence between the Fourier coefficients and the amplitudes and phase shifts, which allows translation between the full separable-basis form and the cosine phase-shifted one. Additionally, we evaluate the performance of FLMs on several scientific computing problems, including benchmark Partial Differential Equations (PDEs) and a family of Optimal Control Problems (OCPs). Computational experiments show that the performance of FLMs is comparable, and often superior, to that of established architectures such as SIREN and vanilla feedforward NNs.
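To make the architecture described in the abstract concrete, below is a minimal sketch of a cosine phase-shifted, separable expansion of the form f(x) ≈ Σ_k a_k Π_d cos(w_{k,d} x_d + φ_{k,d}), with the frequencies, phase shifts, and amplitudes as trainable parameters. The class name, parameterization, and initialization are assumptions inferred from the abstract, not the authors' implementation.

```python
# Hypothetical sketch of the cosine phase-shifted, separable expansion
# suggested by the abstract; not the authors' reference implementation.
import torch
import torch.nn as nn

class FLMSketch(nn.Module):
    def __init__(self, in_dim: int, n_terms: int):
        super().__init__()
        # Trainable frequencies and phase shifts, one per (term, input dimension).
        self.freq = nn.Parameter(torch.randn(n_terms, in_dim))
        self.phase = nn.Parameter(torch.zeros(n_terms, in_dim))
        # Trainable amplitude for each separable cosine term.
        self.amp = nn.Parameter(torch.randn(n_terms) / n_terms ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim) -> cos(w * x + phi): (batch, n_terms, in_dim)
        z = torch.cos(x.unsqueeze(1) * self.freq + self.phase)
        # Separable basis: product over input dimensions, then weighted sum.
        return z.prod(dim=-1) @ self.amp

# Illustrative usage on random 2-D inputs.
model = FLMSketch(in_dim=2, n_terms=32)
x = torch.rand(256, 2)
y_hat = model(x)  # shape: (256,)
```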
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission:
- Added a paragraph to the Literature Review section comparing FLMs with Sparse Spectrum Gaussian Processes (SSGPs).
- Added a set of figures in Appendix C showing how the number of sub-networks affects the FLM approximation.
- Added a brief paragraph on future work directions.
- Fixed some minor typos.
Assigned Action Editor: ~William_T_Redman1
Submission Number: 5887