Neural Geometry for PDEs: Regularity, Stability, and Convergence Guarantees

Published: 01 Mar 2026, Last Modified: 05 Mar 2026
Venue: AI&PDE Poster
License: CC BY-NC-ND 4.0
Keywords: Implicit Neural Representation (INR), PDE, Error Estimates
TL;DR: The paper answers the question: when a PDE is posed on a domain represented by an INR, how does the INR training error propagate to the PDE solution error?
Abstract: Implicit Neural Representations (INRs) have emerged as a powerful tool for geometric representation, yet their suitability for physics-based simulation remains underexplored. While metrics like the Hausdorff distance quantify surface reconstruction quality, they fail to capture the geometric regularity required for provable numerical performance. This work establishes a theoretical framework connecting INR training errors to the solution accuracy of Partial Differential Equations (PDEs), specifically linear elliptic equations. We define the minimal geometric regularity required for INRs to support well-posed boundary value problems and derive a priori error estimates linking the neural network's function approximation error to the finite element discretization error. Our analysis reveals that to match the convergence rate of linear finite elements, the INR training loss must scale quadratically relative to the mesh size.
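The quadratic-scaling claim can be read through an a priori estimate of the following illustrative form; the exact constants, norms, and the split between the two error sources are assumptions for exposition, not the paper's stated result. Writing h for the mesh size and ε for the INR training loss (measuring how far the learned level set is from the true boundary), a plausible estimate for the discrete solution u_{h,ε} on the INR-defined domain is:

```latex
% Illustrative a priori estimate (hedged sketch, not the paper's exact bound):
% geometric consistency error from the INR enters at order \varepsilon / h,
% standard linear FEM interpolation error enters at order h.
\| u - u_{h,\varepsilon} \|_{H^1(\Omega)}
  \;\le\; C_1\, h\, \| u \|_{H^2(\Omega)}
  \;+\; C_2\, \frac{\varepsilon}{h}.
% Balancing the two terms: the second term stays O(h) iff
% \varepsilon = O(h^2), i.e. the training loss must decay
% quadratically in the mesh size to preserve the linear FEM rate.
```

Under this reading, requiring ε = O(h²) makes the geometric error term ε/h decay at the same O(h) rate as the interpolation error, which is consistent with the abstract's conclusion that the training loss must scale quadratically in the mesh size.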
Submission Number: 99