Neural Superposition Networks

10 May 2025 (modified: 29 Oct 2025) · Submitted to NeurIPS 2025 · CC BY 4.0
Keywords: differential equations, physics-informed neural networks, scientific machine learning, differentially constrained architecture, principle of superposition
Abstract: We introduce _Neural Superposition Networks_, a class of physics-constrained neural architectures that exactly satisfy given partial differential equations (PDEs) by construction. In contrast to traditional physics-informed neural networks (PINNs), which enforce PDE constraints via loss regularization, our approach embeds the solution manifold directly into the architecture by expressing the output as a superposition of analytical basis functions that solve the target PDE. This eliminates the need for interior residual loss terms, simplifies training to a single-objective optimization on boundary conditions, and improves numerical stability. We show that for linear PDEs—including Laplace, heat, and incompressible flow constraints—this architectural bias leads to provably convergent approximations. Using maximum principles and classical convergence theory, we establish uniform boundary-to-interior convergence guarantees. For nonlinear PDEs such as Burgers’ equation, we demonstrate that partial structural constraints can still be enforced via transformations (e.g., Cole–Hopf), yielding improved inductive bias over standard PINNs. The resulting networks combine the expressiveness of deep learning with the convergence guarantees of Galerkin and spectral methods. Our framework offers a theoretically grounded and computationally efficient alternative to residual-based training for PDE-constrained problems.
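
To illustrate the central idea described in the abstract, the following is a minimal, hypothetical sketch (not the authors' code) for the 2D Laplace equation on the unit disk: the network output is a learned linear combination of harmonic basis functions r^n cos(nθ) and r^n sin(nθ), each of which satisfies the PDE exactly, so training reduces to fitting the boundary data alone. All names (`HarmonicSuperposition`, `n_modes`, the example boundary condition) are illustrative assumptions.

```python
# Minimal sketch: output = superposition of harmonic basis functions, so the
# Laplace equation holds by construction and only a boundary loss is trained.
import torch

class HarmonicSuperposition(torch.nn.Module):
    def __init__(self, n_modes: int = 16):
        super().__init__()
        # Learnable coefficients: constant mode plus one (cos, sin) pair per frequency.
        self.c0 = torch.nn.Parameter(torch.zeros(1))
        self.a = torch.nn.Parameter(torch.zeros(n_modes))
        self.b = torch.nn.Parameter(torch.zeros(n_modes))
        self.n = torch.arange(1, n_modes + 1, dtype=torch.float32)

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        r = torch.sqrt(x**2 + y**2).unsqueeze(-1)      # (N, 1)
        theta = torch.atan2(y, x).unsqueeze(-1)        # (N, 1)
        rn = r ** self.n                               # (N, n_modes)
        # Each term r^n cos(n*theta), r^n sin(n*theta) is harmonic, so any
        # superposition of them satisfies Laplace's equation exactly.
        return (self.c0
                + (rn * torch.cos(self.n * theta)) @ self.a
                + (rn * torch.sin(self.n * theta)) @ self.b)

# Training touches only boundary points: sample the unit circle and fit an
# assumed Dirichlet condition g(theta); no interior PDE residual term appears.
model = HarmonicSuperposition(n_modes=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
theta_b = torch.rand(512) * 2 * torch.pi
xb, yb = torch.cos(theta_b), torch.sin(theta_b)
g = torch.sin(3 * theta_b)                             # example boundary data
for _ in range(500):
    opt.zero_grad()
    loss = torch.mean((model(xb, yb) - g) ** 2)
    loss.backward()
    opt.step()
```

Because every basis function is an exact solution of the target PDE, the interior residual is identically zero and the optimization is the single boundary-fitting objective described in the abstract; the paper's contribution is the general framework and its convergence guarantees, not this particular basis choice.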
Supplementary Material: zip
Primary Area: Machine learning for sciences (e.g. climate, health, life sciences, physics, social sciences)
Submission Number: 15527