Keywords: Deep Learning, AI for Science, Neural Operator, Partial Differential Equation
TL;DR: We systematically analyze NOs from a unified perspective, considering the orthogonal bases in their kernel operators.
Abstract: Neural operators (NOs) have become popular for learning partial differential equation (PDE) operators. As mappings between infinite-dimensional function spaces, NOs are built from layers, each containing a kernel operator and a linear transform followed by a nonlinear activation. NOs can accurately approximate PDE operators and perform super-resolution, i.e., train and test on grids with different resolutions. Despite this success, the design of the kernel operator, the choice of grids, the capability of generalization and super-resolution, and the applicability to general problems on irregular domains remain poorly understood.
To this end, we systematically analyze NOs from a unified perspective, considering the orthogonal bases in their kernel operators. This analysis facilitates a better understanding and enhancement of NOs in the following:
(1) Generalization bounds of NOs,
(2) Construction of NOs on arbitrary domains,
(3) Enhancement of NOs' performance by designing proper orthogonal bases that align with the operator and domain,
(4) Improvement of NOs through the allocation of suitable grids, and
(5) Investigation of super-resolution error.
Our theory has multiple practical implications: choosing the orthogonal basis and grid points to accelerate training, improving generalization and super-resolution capabilities, and adapting NOs to irregular domains.
Corresponding experiments are conducted to verify our theory. Our paper provides a new perspective for studying NOs.
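To make the layer structure described in the abstract concrete, the following is a minimal sketch of a single neural-operator layer whose kernel operator acts through a truncated orthogonal (Fourier) basis, plus a pointwise linear transform and a nonlinear activation. The function name `spectral_no_layer`, the weight shapes, and the toy usage are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def spectral_no_layer(v, W, R, modes):
    """
    Sketch of one neural-operator layer on a 1D periodic grid.
    v: (n_grid, d) input function values on the grid
    W: (d, d) pointwise linear transform
    R: (modes, d, d) complex weights acting on retained Fourier coefficients
    """
    n, d = v.shape
    # Kernel operator: expand v in an orthogonal (Fourier) basis, keep the
    # lowest `modes` coefficients, and apply a learned linear map per mode.
    v_hat = np.fft.rfft(v, axis=0)                 # (n//2 + 1, d) coefficients
    out_hat = np.zeros_like(v_hat)
    k = min(modes, v_hat.shape[0])
    out_hat[:k] = np.einsum("kij,kj->ki", R[:k], v_hat[:k])
    kernel_term = np.fft.irfft(out_hat, n=n, axis=0)
    # Linear transform applied pointwise, followed by a nonlinear activation.
    return np.maximum(kernel_term + v @ W, 0.0)    # ReLU

# Toy usage with random weights. Because the retained coefficients live in a
# resolution-independent basis, the same weights can be applied on a finer
# grid, which is the mechanism behind super-resolution.
rng = np.random.default_rng(0)
d, modes, n = 4, 8, 64
v = rng.standard_normal((n, d))
W = rng.standard_normal((d, d)) / d
R = (rng.standard_normal((modes, d, d))
     + 1j * rng.standard_normal((modes, d, d))) / d
u = spectral_no_layer(v, W, R, modes)
print(u.shape)  # (64, 4)
```

Swapping the Fourier basis for another orthogonal family (e.g., Chebyshev or a domain-adapted basis) changes only the forward and inverse expansions in this sketch, which is the sense in which the kernel operator is parameterized by its orthogonal basis.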
Supplementary Material: pdf
Submission Number: 4908