Neural operator surrogate models are becoming increasingly popular for material design, where they are used to rapidly evaluate candidate designs. Such surrogate models, however, most commonly amortize either the flow map across initial conditions or the map from a fixed initial condition across design parameters. Neural operator surrogates are fundamentally unsuited for learning across both simultaneously, as the underlying flow map operator of interest itself varies with the system parameters. To address this, recent interest has emerged in PDE foundation models that can subsequently be fine-tuned for particular system parameters of interest. Such expensive fine-tuning, however, is rarely feasible in downstream engineering design use cases. To combat this, we instead propose to perform fine-tuning via a generalization of the in-context learning exhibited by transformers to operator spaces. In particular, we propose a generalization of the transformer that handles sequences of functions, dubbed the "OpFormer," and formally demonstrate that such an architecture is capable of performing in-context learning by implementing a form of gradient descent in an operator RKHS.
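
To make the in-context-learning claim concrete, the sketch below is our own minimal illustration (not the paper's implementation) of the underlying mechanism: a single softmax-free attention update over a context of discretized (input function, output function) pairs reproduces one step of gradient descent, from the zero operator, on the in-context least-squares loss. The grid size, context length, and step size are arbitrary choices for the demonstration.

```python
# Minimal numerical sketch (illustrative only): linear attention over function
# tokens matches one gradient-descent step on an in-context least-squares loss.
import numpy as np

rng = np.random.default_rng(0)
m, n = 64, 16              # grid points per function, number of context pairs
grid = np.linspace(0.0, 1.0, m)
dx = grid[1] - grid[0]     # quadrature weight for discretized L2 inner products
eta = 0.1                  # gradient-descent step size

# Context: input functions u_i and outputs v_i = G(u_i), sampled on a shared grid.
U = rng.standard_normal((n, m))       # one function "token" per row
V = rng.standard_normal((n, m))       # corresponding output functions
u_q = rng.standard_normal(m)          # query input function

# (1) One gradient-descent step from the zero operator on
#     L(W) = 1/2 * sum_i ||W u_i - v_i||_{L2}^2, with W discretized as a matrix,
#     gives W_1 = eta * dx * sum_i v_i u_i^T.
W1 = eta * dx * V.T @ U               # (m, m) discretized linear operator
pred_gd = W1 @ u_q

# (2) Softmax-free attention: values = v_i, keys = u_i, query = u_q,
#     with the discretized L2 inner product <u_i, u_q> as the attention score.
scores = dx * U @ u_q                 # <u_i, u_q> for each context pair
pred_attn = eta * V.T @ scores

print(np.allclose(pred_gd, pred_attn))   # True: the two predictions coincide
```

Replacing the plain L2 inner product with a kernel between functions would give the analogous update in a reproducing kernel Hilbert space of operators, which is the setting the formal result above refers to.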