Abstract: Natural-gradient methods markedly accelerate the training of Physics-Informed Neural Networks (PINNs), yet their Gauss–Newton update must normally be solved in the parameter space, incurring a prohibitive $\mathcal{O}(n^{3})$ time complexity, where $n$ is the number of network weights. We show that exactly the same step can instead be formulated in a generally smaller residual space of size $m=\sum_{\gamma}N_{\gamma}d_{\gamma}$, where each residual class $\gamma$ (e.g. PDE interior, boundary, initial data) contributes $N_{\gamma}$ collocation points of output dimension $d_{\gamma}$.
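For concreteness, here is a minimal worked form of that equivalence; the symbols $J$, $r$, $\theta$ and the damping $\lambda$ are our own notation, not taken from the submission. With stacked residual vector $r(\theta)\in\mathbb{R}^{m}$ and Jacobian $J=\partial r/\partial\theta\in\mathbb{R}^{m\times n}$, a damped Gauss–Newton step can be written either in parameter space or, via the push-through identity $(J^{\top}J+\lambda I_{n})^{-1}J^{\top}=J^{\top}(JJ^{\top}+\lambda I_{m})^{-1}$, in residual space:
$$\delta\theta \;=\; -\,(J^{\top}J+\lambda I_{n})^{-1}J^{\top}r \;=\; -\,J^{\top}(JJ^{\top}+\lambda I_{m})^{-1}r,$$
so the linear solve involves an $m\times m$ system rather than an $n\times n$ one.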
Building on this insight, we introduce Dual Natural Gradient Descent (D-NGD). D-NGD computes the Gauss–Newton step in residual space, augments it with a geodesic-acceleration correction at negligible extra cost, and provides both a dense direct solver for modest $m$ and a Nyström-preconditioned conjugate-gradient solver for larger $m$.
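A minimal NumPy sketch of the residual-space (dual) Gauss–Newton step, assuming the stacked Jacobian $J$ and residual vector $r$ are already available and using a damping constant of our own choosing. The dense $m\times m$ solve below corresponds to the direct-solver path for modest $m$; the Nyström-preconditioned conjugate-gradient variant and the geodesic-acceleration correction described in the abstract are not shown.

```python
import numpy as np

def dual_gauss_newton_step(J, r, damping=1e-6):
    """Damped Gauss-Newton step computed in residual space (dual form).

    J       : (m, n) Jacobian of the stacked residuals w.r.t. the n parameters
    r       : (m,)   stacked residual vector (PDE interior, boundary, initial data)
    damping : regularisation constant (our assumption, not from the paper)

    By the push-through identity
        (J^T J + lam * I_n)^{-1} J^T r  ==  J^T (J J^T + lam * I_m)^{-1} r,
    only an m x m linear system is solved instead of an n x n one.
    """
    m = J.shape[0]
    gram = J @ J.T + damping * np.eye(m)   # m x m Gram matrix in residual space
    alpha = np.linalg.solve(gram, r)       # dual coefficients, O(m^3) dense solve
    return -J.T @ alpha                    # lift the step back to parameter space


# Tiny usage example with random data (illustration only).
rng = np.random.default_rng(0)
m, n = 50, 1000                            # m residuals, n parameters, m << n
J = rng.standard_normal((m, n))
r = rng.standard_normal(m)
step = dual_gauss_newton_step(J, r)
print(step.shape)                          # (1000,)
```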
Experimentally, D-NGD scales second-order PINN optimisation to networks with up to 12.8 million parameters, achieves final $L^{2}$ errors one to three orders of magnitude lower than first-order optimisers (Adam, SGD) and quasi-Newton methods, and, crucially, enables full natural-gradient training of PINNs at this scale on a single GPU.
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: Jean Kossaifi
Submission Number: 4972