A Variational Approach to Physics-Informed Neural Networks for Stochastic Partial Differential Equations

16 Sept 2025 (modified: 12 Feb 2026) · ICLR 2026 Conference Desk Rejected Submission · CC BY 4.0
Keywords: Neural Network, Variational Inference, Steady State Heat Equation, Physics-Informed Neural Network, Partial Differential Equations, Gaussian Process
Abstract: Physics-Informed Neural Networks (PINNs) are a deep learning approach to representing the spatial and temporal characteristics of a distributed physical phenomenon, such as a thermal field, using Neural Networks (NNs). The loss function used to train PINNs includes a term that penalises any violation of the Partial Differential Equation (PDE) governing the distributed phenomenon, in addition to a least-squares penalisation of the misfit to any available observations in the relevant domain. PINNs have been shown to successfully approximate the solutions of PDEs and have proven effective even in the low-data regime. A critical shortcoming is the lack of a systematic treatment of the uncertainty of the approximation. State-of-the-art approaches rely on Bayesian NNs, but these are computationally heavy, both during training and at inference, and the associated training procedure does not derive from first principles. To remedy these limitations, we propose Variational Inference PINNs (VI-PINNs). Our approach derives from first principles: the uncertainty intrinsic to the distributed phenomenon is explained by adopting Stochastic PDEs, while standard measurement uncertainty accounts for the observational uncertainty. This leads to the formulation of a posterior probability for the distributed phenomenon. Drawing parallels with Bayesian inference in finite spaces, and relying on VI techniques to circumvent the otherwise intractable posterior, we derive a training objective that allows us to train two NNs, representing the mean and covariance of the approximation, respectively. The solution may be interpreted as a Bayesian belief about the true distributed phenomenon. Importantly, in the limit, the original PINN framework is recovered. We compare our approach with Bayesian PINNs (B-PINNs).
Our results suggest that VI-PINNs are easier to implement, have a lower training time, and yield results that better align with reality, especially when extrapolating outside the measurement range.
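To make the abstract's setup concrete, the following is a minimal, hypothetical sketch of what a VI-PINN-style objective could look like for the 1-D steady-state heat equation u''(x) = f(x): one small network models the mean of the solution, a second models its log-variance, and the loss combines a variance-weighted Gaussian penalty on the PDE residual with a least-squares data term. All names, network sizes, and the exact form of the objective are illustrative assumptions of this sketch, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, x):
    """Tiny one-hidden-layer tanh MLP; x has shape (n, 1)."""
    W1, b1, W2, b2 = params
    return np.tanh(x @ W1 + b1) @ W2 + b2

def init(width=16):
    """Random small-weight initialisation (illustrative only)."""
    return (rng.normal(size=(1, width)) * 0.5, np.zeros(width),
            rng.normal(size=(width, 1)) * 0.5, np.zeros(1))

mean_net = init()    # NN representing the mean of the approximation
logvar_net = init()  # NN representing its (log-)variance

def vi_pinn_loss(x_col, x_obs, y_obs, f, sigma_obs=0.05, h=1e-3):
    # Second derivative of the mean via central finite differences
    # (an autodiff framework would be used in practice).
    u = lambda x: mlp(mean_net, x)
    u_xx = (u(x_col + h) - 2.0 * u(x_col) + u(x_col - h)) / h**2
    var = np.exp(mlp(logvar_net, x_col))  # predictive variance at collocation points
    # Gaussian negative log-likelihood of the PDE residual under the
    # predicted variance, plus a standard least-squares data term.
    pde_term = np.mean((u_xx - f(x_col))**2 / var + np.log(var))
    data_term = np.mean((u(x_obs) - y_obs)**2) / sigma_obs**2
    return pde_term + data_term

x_col = np.linspace(0.0, 1.0, 50).reshape(-1, 1)  # collocation points
x_obs = np.array([[0.2], [0.8]])                  # sparse observations
y_obs = np.array([[0.0], [0.0]])
loss = vi_pinn_loss(x_col, x_obs, y_obs, f=lambda x: np.sin(np.pi * x))
print(float(loss))
```

Setting the variance network's output to a constant collapses the variance-weighted residual term to the ordinary squared-residual penalty, which is consistent with the abstract's remark that the original PINN framework is recovered in the limit.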
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 7710