Learning Physical Operators using Neural Operators
TL;DR: Spatiotemporally continuous Neural PDE models built in a physics-informed regime where neural operators are structured to learn the nonlinear physical operators, integrated in time using a neural ODE.
Abstract: Neural operators have emerged as promising surrogate models for solving partial differential equations (PDEs), but struggle to generalise beyond training distributions and are often constrained to a fixed temporal discretisation. This work introduces a physics-informed training framework that addresses these limitations by decomposing PDEs using operator splitting methods, training separate neural operators to learn individual nonlinear physical operators while approximating linear operators with fixed finite-difference convolutions. This modular mixture-of-experts architecture enables generalisation to novel physical regimes by explicitly encoding the underlying operator structure. We formulate the modelling task as a neural ordinary differential equation (ODE) where these learned operators constitute the right-hand side, enabling continuous-in-time predictions through standard ODE solvers and implicitly enforcing PDE constraints. Demonstrated on incompressible and compressible Navier--Stokes equations, our approach achieves superior performance when generalising to unseen physics while remaining parameter-efficient, enables temporal extrapolation beyond training horizons, and provides interpretable components whose behaviour can be verified against known physics.
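The operator-split structure described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the 1D diffusion stencil, the toy random-weight "neural operator", and the forward-Euler loop are all illustrative stand-ins chosen here for brevity (the paper targets Navier--Stokes and standard ODE solvers).

```python
import numpy as np

def linear_operator(u, dx):
    # Fixed finite-difference convolution approximating the second derivative
    # u_xx on a periodic grid: the hard-coded linear part of the splitting.
    stencil = np.array([1.0, -2.0, 1.0]) / dx**2
    return np.convolve(np.pad(u, 1, mode="wrap"), stencil, mode="valid")

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(64, 64))  # hypothetical weights; a trained
                                          # neural operator would replace this.

def nonlinear_operator(u):
    # Stand-in for the learned neural operator modelling the nonlinear term.
    return np.tanh(W @ u)

def rhs(u, dx, nu=0.01):
    # Operator-split right-hand side of the neural ODE:
    #   du/dt = nu * L(u) + N_theta(u)
    return nu * linear_operator(u, dx) + nonlinear_operator(u)

# Forward Euler as a stand-in for a standard (e.g. adaptive) ODE solver,
# giving continuous-in-time rollouts at any step size.
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)
dt = 1e-3
for _ in range(100):
    u = u + dt * rhs(u, dx)
```

Because the linear and nonlinear operators are separate modules, each can be inspected or swapped independently, which is the interpretability and generalisation argument the abstract makes.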
Submission Number: 1419