Keywords: Scientific Machine Learning, Physics-Informed Neural Networks, Differential Equations
Abstract: Machine learning models can be biased towards the solutions of given differential equations in two principal ways: through regularisation or through architecture design. Recent research has successfully constrained neural network architectures to produce divergence-free fields and to satisfy Laplace's equation in two dimensions. This work reinterprets these architectures as linear superpositions of generally formulated solutions. The notion of superposition is then exploited to develop novel architectures that satisfy both these and additional differential equations. Beyond new architectures for Laplace's equation and divergence-free fields, we propose novel constrained architectures for the heat equation and even for some nonlinear differential equations, including Burgers' equation. We benchmark superposition-based approaches against previously published architectures and against physics-informed regularisation. We find that embedding differential equation constraints directly into neural network architectures can improve performance, and we hope our results motivate further development of neural network architectures designed to adhere to given differential constraints.
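To make the "constrained by construction" idea concrete, below is a minimal sketch (not the paper's actual architecture) of the classical stream-function trick: in two dimensions, the rotated gradient of any scalar network ψ is divergence-free identically, since div u = ψ_yx − ψ_xy = 0. The sketch assumes JAX; `psi_mlp` and its parameter names are hypothetical.

```python
# Minimal sketch (assumption: JAX; `psi_mlp` is a hypothetical scalar network).
# In 2D, u = (dpsi/dy, -dpsi/dx) is divergence-free by construction, because
# div u = psi_yx - psi_xy = 0 for any twice-differentiable stream function psi.
import jax
import jax.numpy as jnp

def psi_mlp(params, xy):
    """Hypothetical scalar 'stream function' network: R^2 -> R."""
    h = jnp.tanh(params["W1"] @ xy + params["b1"])
    return (params["W2"] @ h + params["b2"]).squeeze()  # scalar output

def velocity(params, xy):
    """Divergence-free vector field obtained by rotating grad(psi) by 90 degrees."""
    grad_psi = jax.grad(psi_mlp, argnums=1)(params, xy)
    return jnp.array([grad_psi[1], -grad_psi[0]])

# Sanity check: the divergence of `velocity` vanishes up to floating-point error.
key = jax.random.PRNGKey(0)
params = {
    "W1": jax.random.normal(key, (16, 2)),
    "b1": jnp.zeros(16),
    "W2": jax.random.normal(key, (1, 16)),
    "b2": jnp.zeros(1),
}
xy = jnp.array([0.3, -0.7])
divergence = jnp.trace(jax.jacobian(velocity, argnums=1)(params, xy))
print(divergence)  # ~0.0
```

No training is needed for the constraint to hold: every parameter setting of `psi_mlp` already yields an exactly divergence-free field, which is the property the architectural approaches benchmarked here exploit.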
Supplementary Material: zip
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6835