MultiPDENet: PDE-embedded Learning with Multi-time-stepping for Accelerated Flow Simulation

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: Introduced a PDE-embedded learning model with multi-time-stepping for accelerated simulation of flows.
Abstract: Solving partial differential equations (PDEs) with numerical methods is computationally expensive, since fine grids and small time steps are required to obtain accurate solutions. Machine learning can accelerate this process, but often suffers from weak generalizability, limited interpretability, and data dependency, and struggles with long-term prediction. To this end, we propose a PDE-embedded network with multiscale time stepping (MultiPDENet), which fuses numerical schemes and machine learning for accelerated simulation of flows. In particular, we design a convolutional filter based on the structure of finite difference stencils, with a small number of parameters to optimize, which estimates the equivalent form of the spatial derivative on a coarse grid to minimize the equation's residual. A Physics Block with a 4th-order Runge-Kutta integrator at the fine time scale is established that embeds the structure of PDEs to guide the prediction. To alleviate the curse of temporal error accumulation in long-term prediction, we introduce a multiscale time integration approach, in which a neural network corrects the prediction error at a coarse time scale. Experiments across various PDE systems, including the Navier-Stokes equations, demonstrate that MultiPDENet can accurately predict long-term spatiotemporal dynamics, even given small and incomplete training data, e.g., spatiotemporally down-sampled datasets. MultiPDENet achieves state-of-the-art performance compared with neural baseline models and delivers a clear speedup over classical numerical methods.
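The abstract's two-scale scheme can be sketched in miniature: a stencil-shaped filter supplies spatial derivatives, an RK4 integrator takes several fine time steps (the "Physics Block"), and a correction term is applied once per coarse step. The sketch below uses the 1D heat equation and placeholder names (`fd_filter`, `rk4_step`, `coarse_step`); the correction is a zero stub where the paper uses a trained network, so this is an illustration of the stepping structure, not the authors' implementation.

```python
import numpy as np

# Illustrative multiscale stepping on u_t = nu * u_xx (1D heat equation).
nu, dx, dt_fine = 0.1, 0.1, 0.001
M = 10  # fine RK4 steps per coarse step

# "Learnable" filter initialized to the standard 2nd-order FD Laplacian stencil;
# in MultiPDENet such weights are optimized to fit derivatives on a coarse grid.
fd_filter = np.array([1.0, -2.0, 1.0]) / dx**2

def rhs(u):
    # Spatial derivative via the stencil-shaped filter (periodic boundary).
    lap = np.convolve(np.pad(u, 1, mode="wrap"), fd_filter, mode="valid")
    return nu * lap

def rk4_step(u, dt):
    # Classical 4th-order Runge-Kutta step at the fine time scale.
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def coarse_step(u, correct=lambda u: np.zeros_like(u)):
    # Physics Block: M fine-scale RK4 steps ...
    for _ in range(M):
        u = rk4_step(u, dt_fine)
    # ... followed by a coarse-scale correction (a trained network in the
    # paper; a zero stub here).
    return u + correct(u)

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(x)
u_next = coarse_step(u)  # one coarse step = M fine steps + correction
```

Because diffusion damps the sine mode, `u_next` has a strictly smaller amplitude than `u`, which is a quick sanity check that the fine-scale integration behaves as expected.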
Lay Summary: Simulating complex physical systems, like fluid flows, with traditional numerical methods is computationally expensive: these methods require very detailed grids and tiny time steps to be accurate. Machine learning (ML) can speed things up, but often struggles to make reliable long-term predictions, is hard to interpret, and needs huge amounts of training data. To overcome these challenges, we developed MultiPDENet. This approach combines numerical methods with machine learning by embedding the core physics equations (PDEs) directly into the model's design. It uses efficient, physics-inspired filters based on numerical stencils to calculate crucial spatial derivatives accurately on much coarser grids, significantly reducing computational effort. For time integration, a dedicated "Physics Block" employs a precise numerical method at a fine time scale. Crucially, to prevent small errors from accumulating over long predictions, MultiPDENet uses multiscale time stepping: a neural network corrects prediction errors at a significantly coarser time scale. Tested on challenging systems like fluid dynamics, MultiPDENet achieves highly accurate long-term predictions even when trained on very limited or sparse data (e.g., data missing in space or time). It outperforms other ML models in accuracy and provides a clear speedup compared to standard numerical methods, offering a powerful tool for faster scientific simulations.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Primary Area: Applications->Chemistry, Physics, and Earth Sciences
Keywords: physics-informed learning, multiscale time stepping, spatiotemporal dynamics, PDEs
Submission Number: 6847