Scalable Transformer for PDE Surrogate Modeling

Published: 21 Sept 2023, Last Modified: 12 Jan 2024 · NeurIPS 2023 poster
Keywords: Efficient attention, Neural PDE solver
TL;DR: We propose a multidimensional factorized attention mechanism that alleviates the curse of dimensionality when applied to PDE problems.
Abstract: Transformers have shown state-of-the-art performance on various applications and have recently emerged as a promising tool for surrogate modeling of partial differential equations (PDEs). Despite the introduction of linear-complexity attention, applying Transformers to problems with a large number of grid points can be numerically unstable and computationally expensive. In this work, we propose the Factorized Transformer (FactFormer), which is based on an axial factorized kernel integral. Concretely, we introduce a learnable projection operator that decomposes the input function into multiple sub-functions with one-dimensional domains. These sub-functions are then evaluated and used to compute the instance-based kernel with an axial factorized scheme. We show that the proposed model is able to simulate 2D Kolmogorov flow on a $256\times 256$ grid and 3D smoke buoyancy on a $64\times64\times64$ grid with good accuracy and efficiency. The proposed factorized scheme can serve as a computationally efficient low-rank surrogate for the full attention scheme when dealing with multi-dimensional problems.
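The sketch below illustrates the general idea of an axial factorized kernel integral on a 2D grid: 1D sub-functions are extracted along each axis, per-axis instance-based kernels are formed from them, and the two 1D kernels are applied sequentially as a low-rank surrogate for a full attention kernel over all grid points. This is a minimal illustration, not the authors' implementation; the module name `FactorizedAttention2D` and the mean-pooling used as a stand-in for the paper's learnable projection operator are assumptions.

```python
# Minimal sketch of axial factorized attention on a 2D grid (assumed names/shapes).
import torch
import torch.nn as nn


class FactorizedAttention2D(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Per-axis query/key projections acting on 1D sub-functions.
        self.to_qk_x = nn.Linear(dim, 2 * dim, bias=False)
        self.to_qk_y = nn.Linear(dim, 2 * dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.scale = dim ** -0.5

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (B, H, W, C) discretized input function on a 2D grid.
        b, h, w, c = u.shape
        # Decompose the input into sub-functions with 1D domains.
        # (Mean pooling here; the paper uses a learnable projection operator.)
        fx = u.mean(dim=2)  # (B, H, C): sub-function of x only
        fy = u.mean(dim=1)  # (B, W, C): sub-function of y only

        qx, kx = self.to_qk_x(fx).chunk(2, dim=-1)  # (B, H, C) each
        qy, ky = self.to_qk_y(fy).chunk(2, dim=-1)  # (B, W, C) each
        v = self.to_v(u)                            # (B, H, W, C)

        # Instance-based 1D kernels along each axis: (B, H, H) and (B, W, W).
        kern_x = torch.softmax(qx @ kx.transpose(-1, -2) * self.scale, dim=-1)
        kern_y = torch.softmax(qy @ ky.transpose(-1, -2) * self.scale, dim=-1)

        # Apply the axial kernels sequentially: a low-rank surrogate for the
        # full (H*W) x (H*W) attention kernel.
        out = torch.einsum('bij,bjwc->biwc', kern_x, v)    # integrate over x
        out = torch.einsum('bkl,bhlc->bhkc', kern_y, out)  # integrate over y
        return out
```

In this factorized form the kernel cost scales with H^2 + W^2 per instance rather than the (HW)^2 cost of full attention over the flattened grid, which is what makes grids such as 256x256 or 64x64x64 tractable.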
Submission Number: 13468