BWLer: Barycentric Weight Layer Elucidates a Precision-Conditioning Tradeoff for PINNs

Published: 28 Jun 2025, Last Modified: 28 Jun 2025
Venue: TASC 2025
License: CC BY 4.0
Keywords: physics-informed neural networks, barycentric interpolation, spectral methods, high precision, scientific machine learning, partial differential equations, polynomial approximation, numerical accuracy
TL;DR: Motivated by spectral methods, we replace neural networks in PINNs with barycentric interpolants and elucidate a precision-conditioning tradeoff; using our model, we achieve machine precision on three benchmark PDEs.
Abstract: Physics-informed neural networks (PINNs) offer a flexible way to solve partial differential equations (PDEs) with machine learning, yet they still fall well short of the machine-precision accuracy many scientific tasks demand. This motivates an investigation into whether the precision ceiling comes from the ill-conditioning of the PDEs themselves or from the typical multi-layer perceptron (MLP) architecture. We introduce the Barycentric Weight Layer (BWLer), which models the PDE solution through barycentric polynomial interpolation. A BWLer can be added on top of an existing MLP (a BWLer-hat) or replace it completely (explicit BWLer), cleanly separating how we represent the solution from how we take its derivatives for the physics loss. Using BWLer, we identify fundamental precision limitations within the MLP: on a simple 1-D interpolation task, even MLPs with $O(10^5)$ parameters stall around $10^{-8}$ relative error (about eight orders of magnitude above float64 machine precision) before any PDE terms are added. In PDE learning, adding a BWLer lifts this ceiling and exposes a tradeoff between achievable accuracy and the conditioning of the PDE loss. For linear PDEs we fully characterize this tradeoff with an explicit error decomposition and navigate it during training with spectral derivatives and preconditioning. Across five benchmark PDEs, adding a BWLer on top of an MLP improves $\ell_2$ relative error by up to $30\times$ for convection, $10\times$ for reaction, and $1800\times$ for wave equations while remaining compatible with first-order optimizers. Replacing the MLP entirely lets an explicit BWLer reach near-machine-precision on convection, reaction, and wave problems (up to 10 billion times better than prior results) and match the performance of standard PINNs on stiff Burgers’ and irregular-geometry Poisson problems. Together, these findings point to a practical path for combining the flexibility of PINNs with the precision of classical spectral solvers.
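To make the core representation concrete, here is a minimal NumPy sketch of second-form barycentric interpolation on Chebyshev points of the second kind, the classical primitive a BWLer parameterizes. The choice of Chebyshev nodes and the function names are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def chebyshev_nodes_and_weights(n):
    # Chebyshev points of the second kind on [-1, 1] and their barycentric
    # weights w_j = (-1)^j * delta_j, with delta_j = 1/2 at the endpoints.
    j = np.arange(n + 1)
    x = np.cos(j * np.pi / n)
    w = (-1.0) ** j
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def barycentric_eval(x_eval, nodes, weights, values):
    # Second (true) barycentric formula:
    #   p(x) = sum_j (w_j f_j / (x - x_j)) / sum_j (w_j / (x - x_j))
    diff = x_eval[:, None] - nodes[None, :]   # shape (m, n+1)
    exact = diff == 0.0                       # x_eval landed exactly on a node
    diff[exact] = 1.0                         # dodge the divide-by-zero
    terms = weights / diff
    p = (terms @ values) / terms.sum(axis=1)
    rows, cols = np.nonzero(exact)
    p[rows] = values[cols]                    # return stored node values there
    return p

# Usage: a smooth target is recovered to near machine precision, the regime
# the abstract contrasts with the ~1e-8 stall of plain MLPs. In a BWLer the
# node values below would be trainable parameters (assumed setup).
nodes, w = chebyshev_nodes_and_weights(64)
vals = np.exp(np.sin(3.0 * nodes))
xs = np.linspace(-1.0, 1.0, 1001)
err = np.abs(barycentric_eval(xs, nodes, w, vals) - np.exp(np.sin(3.0 * xs)))
print(f"max interpolation error: {err.max():.2e}")   # ~1e-15 for smooth f
```

Because evaluation is a ratio of two weighted sums in the node values, those values can serve directly as learnable parameters, and derivatives for the physics loss can be taken spectrally rather than by backpropagating through an MLP, which is the separation of representation and differentiation the abstract describes.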
Submission Number: 6