Keywords: Physics-Informed Neural Networks, Deep learning, Feature learning, Partial differential equations, Scientific machine learning
Abstract: Recent advances in Physics-Informed Neural Networks (PINNs) have deployed fully-connected multi-layer deep learning architectures to solve partial differential equations (PDEs). Such architectures, however, struggle to reduce prediction error below $O(10^{-5})$, even with substantial network sizes and prolonged training periods. Methods that reduce error further, to $O(10^{-7})$, generally come with high computational costs. This paper introduces a $\textbf{S}$ingle-layered $\textbf{A}$rchitecture with $\textbf{F}$eature $\textbf{E}$ngineering (SAFE-NET) that reduces the error by orders of magnitude using far fewer parameters, challenging the prevailing belief that modern PINNs are effectively learning features in these scientific applications. Our strategy is to return to basic ideas in machine learning: with random Fourier features, a simplified single-layer network architecture, and an effective optimizer we call $(\text{Adam} + \text{L-BFGS})^2$, SAFE-NET accelerates training and improves accuracy by reducing the number of parameters and improving the conditioning of the PINN optimization problem. Our numerical results reveal that SAFE-NET converges faster, matches or generally exceeds the performance of more complex optimizers and multi-layer networks, improves problem conditioning throughout training, and works robustly across many problem settings. On average, SAFE-NET uses an order of magnitude fewer parameters than a conventional PINN setup to reach comparable results in 20 percent as many epochs, each of which is 50 percent faster. Our findings suggest that conventional deep learning architectures for PINNs waste their representational capacity, failing to learn simple but effective features.
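The core idea described in the abstract — a single linear layer applied on top of random Fourier features — can be sketched as below. This is a minimal illustrative forward pass in NumPy, not the authors' SAFE-NET implementation; the input dimension, number of frequencies, and frequency scale are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(x, B):
    """Map inputs x of shape (n, d) to random Fourier features
    [sin(xB), cos(xB)] of shape (n, 2m), where B is a (d, m)
    matrix of random frequencies."""
    proj = x @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=1)

# Hypothetical sizes: 2-D input (e.g., space and time), m frequencies.
d, m = 2, 16
B = rng.normal(scale=1.0, size=(d, m))          # random frequency matrix
W = rng.normal(size=(2 * m, 1)) / np.sqrt(2 * m)  # single linear layer

x = rng.uniform(size=(5, d))                    # sample collocation points
u_pred = fourier_features(x, B) @ W             # predicted PDE solution values
```

In an actual PINN, `u_pred` would be differentiated with respect to `x` (e.g., via automatic differentiation) to form the PDE residual loss; here the sketch only shows the feature map and the single-layer readout that replace a deep fully-connected network.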
Primary Area: other topics in machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 12815