Neural Interpretable PDEs: Harmonizing Fourier Insights with Attention for Scalable and Interpretable Physics Discovery

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · License: CC BY 4.0
Abstract: Attention mechanisms have emerged as transformative tools in core AI domains such as natural language processing and computer vision. Yet, their largely untapped potential for modeling intricate physical systems presents a compelling frontier. Learning such systems often entails discovering operators that map between functional spaces using limited instances of function pairs---a task commonly framed as a severely ill-posed inverse PDE problem. In this work, we introduce Neural Interpretable PDEs (NIPS), a novel neural operator architecture that builds upon and enhances Nonlocal Attention Operators (NAO) in both predictive accuracy and computational efficiency. NIPS employs a linear attention mechanism to enable scalable learning and integrates a learnable kernel network that acts as a channel-independent convolution in Fourier space. As a consequence, NIPS eliminates the need to explicitly compute and store large pairwise interactions, effectively amortizing the cost of handling spatial interactions into the Fourier transform. Empirical evaluations demonstrate that NIPS consistently surpasses NAO and other baselines across diverse benchmarks, heralding a substantial leap in scalable, interpretable, and efficient physics learning. The code and data accompanying this paper are available at https://github.com/fishmoon1234/Nonlocal-Attention-Operator.
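For illustration, here is a minimal PyTorch sketch of the two ingredients the abstract names: a linear attention layer (which never materializes the n × n pairwise interaction map) and a learnable kernel acting as a channel-independent convolution in Fourier space. All module names, the feature map, and the tensor shapes are our assumptions for this sketch, not the authors' implementation; see the linked repository for the actual NIPS code.

```python
# Minimal sketch (assumptions, not the authors' code) of linear attention
# composed with a channel-independent Fourier-space convolution.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearAttention(nn.Module):
    """O(n) attention via the kernelized form phi(q) @ (phi(k)^T @ v),
    which avoids forming the n x n pairwise interaction matrix."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n, dim); phi(t) = elu(t) + 1 keeps features positive
        q, k, v = F.elu(self.q(x)) + 1, F.elu(self.k(x)) + 1, self.v(x)
        kv = torch.einsum("bnd,bne->bde", k, v)         # aggregate keys/values first
        norm = torch.einsum("bnd,bd->bn", q, k.sum(1))  # row-wise normalizer
        return torch.einsum("bnd,bde->bne", q, kv) / (norm.unsqueeze(-1) + 1e-6)


class FourierKernel(nn.Module):
    """Learnable kernel as a channel-independent convolution in Fourier space:
    one complex multiplier per retained mode, shared across channels, so
    spatial interactions are amortized into the FFT."""

    def __init__(self, n_modes: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_modes, dtype=torch.cfloat))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n, dim); convolve along the spatial axis
        x_hat = torch.fft.rfft(x, dim=1)
        out_hat = torch.zeros_like(x_hat)
        m = min(self.weight.shape[0], x_hat.shape[1])
        out_hat[:, :m] = x_hat[:, :m] * self.weight[:m, None]  # same filter per channel
        return torch.fft.irfft(out_hat, n=x.shape[1], dim=1)


if __name__ == "__main__":
    x = torch.randn(2, 64, 32)                  # (batch, grid points, channels)
    y = FourierKernel(16)(LinearAttention(32)(x))
    print(y.shape)                              # torch.Size([2, 64, 32])
```

In this sketch, neither component stores pairwise spatial interactions: the attention cost is linear in n, and the convolution's cost is dominated by the FFT, which mirrors the amortization the abstract describes.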
Lay Summary: In many physical problems, predicting a system's responses is not enough. To trust those predictions, we also want to know the mechanism behind the responses. Our work designs an accurate and efficient learning algorithm that discovers both the physical system's responses and the underlying mechanism. The key is a linear attention mechanism together with a learnable kernel network that acts in Fourier space. We demonstrate the performance of our model, NIPS, on multiple material modeling examples. The model can predict material deformation on a new, unseen microstructure, and can also recover the underlying microstructure.
Link To Code: https://github.com/fishmoon1234/Nonlocal-Attention-Operator
Primary Area: Applications->Chemistry, Physics, and Earth Sciences
Keywords: Neural Operators, Physics Discovery, Attention Mechanism, Inverse PDE problems
Submission Number: 15041