Keywords: Neural operator, partial differential equation, conservation law, adaptive correction
Abstract: Physical laws, such as the conservation of mass and momentum, are fundamental principles in many physical systems. Neural operators have achieved promising performance in learning the solutions to those systems, but they often fail to ensure conservation.
Existing methods typically enforce conservation via hand-crafted post-processing or architectural constraints, leading to limited model flexibility and adaptability. In this work, we propose a novel adaptive correction approach to ensure the conservation of fundamental quantities for neural operator outputs. Our method introduces a lightweight learnable operator to adaptively enforce the target conservation law during training. This mechanism allows the model to flexibly and adaptively correct its outputs while guaranteeing conservation. We provide a theoretical guarantee showing that neural operators with our correction method can potentially achieve lower reconstruction loss than their conservation-constrained counterparts. Our method is evaluated across multiple neural operator architectures and representative PDEs. Extensive experiments show that incorporating our correction method into baseline models significantly improves both accuracy and stability. In addition, the experimental results demonstrate that our approach consistently achieves superior performance over widely used conservation-enforcement techniques on various PDE benchmarks.
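The abstract's core idea can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's actual operator: it shows one simple way a correction step can enforce exact conservation of a scalar quantity (here, total "mass") by redistributing the residual through nonnegative weights, which a learnable operator could produce. The function name, weights, and values are all invented for illustration.

```python
# Hypothetical sketch: enforce exact conservation of a scalar quantity
# by adding a weighted correction to a neural operator's output.
# The weights stand in for a lightweight learnable correction operator.

def conservative_correction(u_pred, mass_target, weights):
    """Return a corrected field whose sum equals mass_target exactly.

    Splits the conservation residual across grid points in proportion
    to the (assumed positive) weights, so the total is preserved by
    construction regardless of what the base model predicted.
    """
    residual = mass_target - sum(u_pred)
    w_sum = sum(weights)
    return [u + residual * w / w_sum for u, w in zip(u_pred, weights)]

u_pred = [0.9, 1.2, 0.8, 1.0]    # raw neural-operator output (illustrative)
mass_target = 4.0                # conserved quantity computed from the input state
weights = [1.0, 2.0, 1.0, 1.0]   # stand-in for learned correction weights

u_corr = conservative_correction(u_pred, mass_target, weights)
```

Because the residual is scaled by `w / w_sum`, the corrected field sums to `mass_target` exactly for any positive weights; learning the weights (rather than fixing them, as a hand-crafted post-processing step would) is what makes such a correction adaptive.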
Primary Area: learning on time series and dynamical systems
Submission Number: 5839