Gradient Alignment in Physics-informed Neural Networks: A Second-Order Optimization Perspective

Published: 18 Sept 2025, Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY-NC-SA 4.0
Keywords: Scientific Machine Learning, Physics-informed Neural Networks, Partial Differential Equations, Second-order Optimization, Gradient Conflicts
TL;DR: This work identifies gradient conflicts as a key challenge in training PINNs and shows that (quasi) second-order methods—especially SOAP—effectively resolve them, leading to 2–10x accuracy gains on 10 PDE benchmarks.
Abstract: Physics-informed neural networks (PINNs) have shown significant promise in computational science and engineering, yet they often face optimization challenges and limited accuracy. In this work, we identify directional gradient conflicts during PINN training as a critical bottleneck. We introduce a novel gradient alignment score to systematically diagnose this issue through both theoretical analysis and empirical experiments. Building on these insights, we show that (quasi) second-order optimization methods inherently mitigate gradient conflicts, thereby consistently outperforming the widely used Adam optimizer. Among them, we highlight the effectiveness of SOAP \cite{vyas2024soap} by establishing its connection to Newton’s method. Empirically, SOAP achieves state-of-the-art results on 10 challenging PDE benchmarks, including the first successful application of PINNs to turbulent flows at Reynolds numbers up to 10,000. It yields 2–10x accuracy improvements over existing methods while maintaining computational scalability, advancing the frontier of neural PDE solvers for real-world, multi-scale physical systems. All code and datasets used in this work are publicly available at: \url{https://github.com/PredictiveIntelligenceLab/jaxpi/tree/pirate}.
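To make the notion of gradient conflict concrete, below is a minimal JAX sketch of the kind of diagnostic the abstract describes, assuming the alignment score is a cosine-similarity-style measure between the gradients of two PINN loss terms; the paper's exact definition may differ, and the names `residual_loss` and `boundary_loss` are hypothetical placeholders for user-supplied loss functions.

```python
# Illustrative sketch (not the paper's exact score): a negative value means
# the PDE residual loss and the boundary loss pull the parameters in
# directly conflicting directions.
import jax
import jax.numpy as jnp

def flat_grad(loss_fn, params, batch):
    """Gradient of a scalar loss w.r.t. params, flattened into one vector."""
    grads = jax.grad(loss_fn)(params, batch)
    leaves = jax.tree_util.tree_leaves(grads)
    return jnp.concatenate([jnp.ravel(g) for g in leaves])

def gradient_alignment(params, res_batch, bc_batch, residual_loss, boundary_loss):
    """Cosine similarity between two loss-term gradients:
    +1 = perfectly aligned, 0 = orthogonal, -1 = directly conflicting."""
    g_res = flat_grad(residual_loss, params, res_batch)
    g_bc = flat_grad(boundary_loss, params, bc_batch)
    denom = jnp.linalg.norm(g_res) * jnp.linalg.norm(g_bc) + 1e-12
    return jnp.dot(g_res, g_bc) / denom
```

Tracking such a score over training iterations is one way to observe the conflicts the paper argues (quasi) second-order methods like SOAP mitigate.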
Supplementary Material: zip
Primary Area: Machine learning for sciences (e.g. climate, health, life sciences, physics, social sciences)
Submission Number: 11060