Keywords: DNN-based PDE solvers, SGD, continuous modeling, error estimates
Abstract: Deep neural network-based PDE solvers have shown remarkable promise for tackling high-dimensional partial differential equations, yet their training dynamics and error behavior are not well understood.
This paper develops a unified continuous-time framework based on stochastic differential equations (SDEs) to analyze the noisy regularized stochastic gradient descent (SGD) algorithm applied to deep PDE solvers.
Our approach establishes weak error estimates between this algorithm and its continuous approximation, and provides new asymptotic error characterizations via invariant measures (a schematic sketch follows the abstract).
Importantly, we dispense with the restrictive assumption that the loss gradient is globally Lipschitz continuous, making our theory more applicable to practical deep networks.
Our analysis focuses on general second-order elliptic PDEs; the proposed framework, however, is not tied to this form and can in principle be extended to broader classes of PDEs.
Furthermore, we conduct systematic experiments to reveal how stochasticity affects solution accuracy and the stability domains of optimizers.
Our results indicate that stochasticity can affect the stability of solutions differently near different local minima; in practical training, strategies should therefore be adapted to the local optimization landscape to improve the robustness and stability of neural PDE solvers.
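For concreteness, here is a minimal sketch of the discrete-to-continuous correspondence the abstract describes. The notation is ours and illustrative, not taken from the paper: $L$ is a PDE-solver loss (built, for instance, from the residual of a second-order elliptic equation $-\nabla\cdot(A(x)\nabla u) + b(x)\cdot\nabla u + c(x)\,u = f$), $\eta$ a learning rate, $\lambda$ a regularization strength, and $\beta$ an inverse temperature.

% Hypothetical instance (our notation): noisy regularized SGD on a loss L,
% its Langevin-type SDE approximation, and the resulting Gibbs invariant measure.
\[
  \theta_{k+1} \;=\; \theta_k \;-\; \eta\,\bigl(\nabla L(\theta_k) + \lambda\,\theta_k\bigr)
  \;+\; \sqrt{2\eta\beta^{-1}}\,\xi_k,
  \qquad \xi_k \sim \mathcal{N}(0, I),
\]
\[
  \mathrm{d}\Theta_t \;=\; -\bigl(\nabla L(\Theta_t) + \lambda\,\Theta_t\bigr)\,\mathrm{d}t
  \;+\; \sqrt{2\beta^{-1}}\,\mathrm{d}W_t,
  \qquad
  \pi(\theta) \;\propto\; \exp\!\Bigl(-\beta\bigl(L(\theta) + \tfrac{\lambda}{2}\lVert\theta\rVert^{2}\bigr)\Bigr).
\]

In this setting, weak error typically refers to the discrepancy $\bigl|\mathbb{E}[\varphi(\theta_k)] - \mathbb{E}[\varphi(\Theta_{k\eta})]\bigr|$ for smooth test functions $\varphi$, controlled by a power of $\eta$, while the invariant measure $\pi$ underlies the asymptotic error characterization as $t \to \infty$.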
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 16448