Transformer for Partial Differential Equations’ Operator Learning

Published: 06 May 2023, Last Modified: 06 May 2023. Accepted by TMLR.
Abstract: Data-driven learning of partial differential equations' solution operators has recently emerged as a promising paradigm for approximating the underlying solutions. The solution operators are usually parameterized by deep learning models that are built upon problem-specific inductive biases. An example is a convolutional or a graph neural network that exploits the local grid structure where function values are sampled. The attention mechanism, on the other hand, provides a flexible way to implicitly exploit the patterns within inputs and, furthermore, the relationship between arbitrary query locations and the inputs. In this work, we present an attention-based framework for data-driven operator learning, which we term Operator Transformer (OFormer). Our framework is built upon self-attention, cross-attention, and a set of point-wise multilayer perceptrons (MLPs), and thus it makes few assumptions on the sampling pattern of the input function or the query locations. We show that the proposed framework is competitive on standard PDE benchmark problems and can be flexibly adapted to different types of grids.
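To make the architecture described in the abstract concrete, below is a minimal sketch of an attention-based operator learner in the same spirit: a point-wise MLP lifts sampled function values and their coordinates into latent tokens, a self-attention encoder processes the token set, and a cross-attention decoder lets arbitrary query coordinates attend to the encoded input function. This is not the authors' OFormer implementation (see the linked repository for that); all module choices, names, and hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class OperatorTransformerSketch(nn.Module):
    """Hypothetical sketch of an attention-based PDE operator learner.

    Encoder: point-wise lift + self-attention over the input sample set.
    Decoder: cross-attention from embedded query coordinates to the encoded
    input function, followed by a point-wise MLP head. Because attention
    operates on sets, no regular grid structure is assumed.
    """
    def __init__(self, in_dim=1, coord_dim=2, width=64, heads=4, depth=2):
        super().__init__()
        # Point-wise lift: (function value, sample coordinate) -> latent token
        self.lift = nn.Sequential(
            nn.Linear(in_dim + coord_dim, width), nn.GELU(), nn.Linear(width, width)
        )
        # Self-attention encoder over the set of input samples
        enc_layer = nn.TransformerEncoderLayer(
            d_model=width, nhead=heads, dim_feedforward=2 * width, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=depth)
        # Point-wise embedding of arbitrary query coordinates
        self.query_embed = nn.Sequential(
            nn.Linear(coord_dim, width), nn.GELU(), nn.Linear(width, width)
        )
        # Cross-attention: query tokens attend to the encoded input function
        self.cross_attn = nn.MultiheadAttention(width, heads, batch_first=True)
        # Point-wise head mapping latent features to output function values
        self.head = nn.Sequential(
            nn.Linear(width, width), nn.GELU(), nn.Linear(width, in_dim)
        )

    def forward(self, u, x_in, x_query):
        # u:       (B, N, in_dim)    input function values at N sample points
        # x_in:    (B, N, coord_dim) coordinates of the N sample points
        # x_query: (B, M, coord_dim) query coordinates; M need not equal N
        z = self.encoder(self.lift(torch.cat([u, x_in], dim=-1)))
        q = self.query_embed(x_query)
        out, _ = self.cross_attn(q, z, z)  # queries attend to encoded inputs
        return self.head(out)              # (B, M, in_dim) predicted values

# Usage: irregular input grid of 256 points, queried at 100 other locations
model = OperatorTransformerSketch()
u = torch.randn(8, 256, 1)
x_in = torch.rand(8, 256, 2)
x_query = torch.rand(8, 100, 2)
print(model(u, x_in, x_query).shape)  # torch.Size([8, 100, 1])
```

Note how the decoupling of input sample locations (`x_in`) from query locations (`x_query`) is what allows evaluation on grids different from the one the input function was sampled on, which is the grid-flexibility property the abstract claims.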
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/BaratiLab/OFormer
Assigned Action Editor: ~Tie-Yan_Liu1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 722