Spectral-Bias and Kernel-Task Alignment in Physically Informed Neural Networks

21 Sept 2023 (modified: 11 Feb 2024). Submitted to ICLR 2024.
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: PINNs, Kernels, Gaussian Processes, Learning Theory, Statistical Physics, Bayesian Inference
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: The neurally-informed equation: a new tool for predicting performance and spectral-bias in infinitely wide PINNs
Abstract: Physically informed neural networks (PINNs) are a promising emerging method for solving differential equations. As in many other deep learning approaches, the choice of PINN design and training protocol requires careful craftsmanship. Here, we suggest a comprehensive theoretical framework that sheds light on this important problem. Leveraging an equivalence between infinitely over-parameterized neural networks and Gaussian process regression (GPR), we derive an integro-differential equation that governs PINN prediction in the large-dataset limit: the neurally-informed equation. This equation augments the original differential equation with a kernel term that reflects architecture choices. It allows quantifying the implicit bias induced by the network via a spectral decomposition of the source term in the original differential equation. Measuring this spectral bias in practice provides guidelines and insights for architecture design.
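The spectral decomposition described in the abstract can be sketched numerically. The toy below is a hypothetical illustration, not the paper's method: it stands in an RBF kernel for the (unknown) kernel of an infinite-width network, eigendecomposes its Gram matrix on a 1D grid, and projects a source term onto the kernel eigenvectors to see how many kernel modes are needed to capture the source's energy. All names, the grid, the lengthscale, and the example source term are assumptions chosen for illustration.

```python
import numpy as np

# 1D grid standing in for the domain of the differential equation.
x = np.linspace(0.0, 1.0, 200)

# RBF kernel as a stand-in for the architecture-induced kernel
# (the true infinite-width kernel depends on the network design).
lengthscale = 0.2
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / lengthscale**2)

# Eigendecomposition of the Gram matrix approximates the kernel spectrum.
eigvals, eigvecs = np.linalg.eigh(K)
order = np.argsort(eigvals)[::-1]  # sort modes by decreasing eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Example source term: one low-frequency and one high-frequency component.
f = np.sin(2 * np.pi * x) + 0.3 * np.sin(20 * np.pi * x)

# Spectral coefficients: projection of f onto each kernel eigenvector.
coeffs = eigvecs.T @ f

# Cumulative fraction of the source's energy in the top-k kernel modes:
# a crude proxy for kernel-task alignment (fewer modes = better aligned).
energy = np.cumsum(coeffs**2) / np.sum(coeffs**2)
k90 = int(np.searchsorted(energy, 0.9)) + 1
print(f"modes needed for 90% of the source energy: {k90}")
```

A source term whose energy concentrates in the kernel's leading modes is, in this picture, one the architecture is implicitly biased toward; shrinking the lengthscale shifts energy capture toward more (higher-frequency) modes.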
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: pdf
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3811