Data-Efficient Operator Learning via Unsupervised Pretraining and In-Context Learning

Published: 03 Mar 2024 · Last Modified: 04 May 2024 · AI4DiffEqtnsInSci @ ICLR 2024 Poster · CC BY 4.0
Keywords: scientific machine learning, unsupervised pretraining, neural operators, foundation models
TL;DR: We introduce unsupervised pretraining and in-context learning for Scientific Machine Learning, significantly enhancing data efficiency and generalizability of neural operators.
Abstract: Recent years have witnessed the promise of coupling machine learning methods with physical, domain-specific insight to solve scientific problems governed by partial differential equations (PDEs). However, these methods remain data-intensive: they require large amounts of simulated PDE solutions, which reintroduces the cost of the numerical solvers they were meant to avoid. In this work, seeking data efficiency, we design unsupervised pretraining and in-context learning methods for PDE operator learning. To reduce the need for training data with simulated solutions, we pretrain neural operators on unlabeled PDE data using reconstruction-based proxy tasks. To improve out-of-distribution (OOD) generalization, we further enable neural operators to flexibly leverage in-context OOD examples, without extra training costs or architectural changes. Extensive empirical evaluations on a diverse set of PDEs demonstrate that our method is highly data-efficient, generalizes better, and even outperforms conventional vision-pretrained models.
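To make the pretraining idea concrete, below is a minimal sketch of reconstruction-based unsupervised pretraining on unlabeled PDE snapshots. It is not the authors' implementation: the names (SimpleOperator, masked_reconstruction_step, mask_ratio) and the simple convolutional stand-in for a neural operator are illustrative assumptions, and the random tensors stand in for real unlabeled PDE fields.

```python
# Sketch only: masked-reconstruction pretraining of a neural-operator-style
# model on unlabeled PDE snapshots (no simulated solution labels required).
import torch
import torch.nn as nn


class SimpleOperator(nn.Module):
    """Stand-in for a neural operator (e.g., an FNO); here a small conv net."""

    def __init__(self, channels: int = 1, width: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.GELU(),
            nn.Conv2d(width, width, 3, padding=1), nn.GELU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def masked_reconstruction_step(model, fields, mask_ratio=0.5):
    """One proxy-task step: hide random patches of an unlabeled PDE field
    and train the operator to reconstruct the full field."""
    b, c, h, w = fields.shape
    # Coarse 8x8 patch mask, upsampled to the field resolution.
    mask = (torch.rand(b, 1, 8, 8, device=fields.device) > mask_ratio).float()
    mask = nn.functional.interpolate(mask, size=(h, w), mode="nearest")
    pred = model(fields * mask)                  # reconstruct from masked input
    return nn.functional.mse_loss(pred, fields)  # reconstruction loss


if __name__ == "__main__":
    model = SimpleOperator()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    fields = torch.randn(16, 1, 64, 64)          # placeholder unlabeled snapshots
    for _ in range(10):
        opt.zero_grad()
        loss = masked_reconstruction_step(model, fields)
        loss.backward()
        opt.step()
```

The in-context component described in the abstract would then operate purely at inference time, e.g. by retrieving and aggregating predictions from a few similar OOD demonstration pairs; the exact mechanism is specified in the paper, not in this sketch.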
Submission Number: 14