On the Value of Tokeniser Pretraining in Physics Foundation Models

Published: 01 Mar 2026 · Last Modified: 02 Mar 2026 · AI&PDE Poster · CC BY 4.0
Keywords: physics foundation models, representation learning, pretraining strategies, surrogate models, PDE emulation, transfer learning, autoregressive models, domain adaptation
TL;DR: A systematic investigation of tokeniser pretraining for physics foundation models, demonstrating that benefits depend critically on domain alignment, with in-domain pretraining reducing error by 64%.
Abstract: We investigate the impact of tokeniser pretraining on the accuracy and efficiency of physics emulation. Modern high-resolution simulations produce vast volumes of data spanning diverse physical regimes and scales. Training foundation models to learn the dynamics underlying such data enables the modelling of complex multiphysics phenomena, especially in data-limited settings. The emerging class of physics foundation models typically aims to learn two tasks jointly: (i) extracting compact representations of high-resolution spatiotemporal data, and (ii) capturing governing physical dynamics. However, learning both tasks from scratch simultaneously can impede the effectiveness of either process. We demonstrate that pretraining the tokeniser with an autoencoding objective prior to training the dynamics model enhances computational efficiency for downstream tasks. Notably, the magnitude of this benefit depends on domain alignment: pretraining on the same physical system as the downstream task yields the largest improvements, while pretraining on other systems provides moderate gains. In-domain pretraining reduces VRMSE by 64% after 10,500 training steps compared to training from scratch. To our knowledge, this is the first systematic investigation of tokeniser pretraining for physics foundation models. We further introduce flexible spatiotemporal compression operations that extend causal convolutions to support runtime-adjustable compression ratios, enabling efficient adaptation to diverse downstream tasks. Our findings provide practical guidance for training efficient physics emulators and highlight the importance of strategic pretraining data selection.
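To make the two-stage recipe in the abstract concrete, below is a minimal PyTorch sketch: stage 1 pretrains a tokeniser with a reconstruction (autoencoding) objective, stage 2 freezes it and trains a dynamics model in latent space. All names (`Tokeniser`, the dynamics head), architectures, shapes, and hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of two-stage training: tokeniser pretraining, then latent dynamics.
# Every architecture choice here is a stand-in assumption, not the paper's.
import torch
import torch.nn as nn

class Tokeniser(nn.Module):
    """Convolutional autoencoder compressing spatial fields to latent tokens."""
    def __init__(self, channels=1, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(64, latent, 4, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent, 64, 4, stride=2, padding=1), nn.GELU(),
            nn.ConvTranspose2d(64, channels, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Stage 1: pretrain the tokeniser with an autoencoding objective.
tok = Tokeniser()
opt = torch.optim.AdamW(tok.parameters(), lr=1e-4)
for _ in range(100):                       # stand-in for the real pretraining loop
    x = torch.randn(8, 1, 64, 64)          # stand-in for simulation snapshots
    loss = nn.functional.mse_loss(tok(x), x)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: freeze the tokeniser, train a dynamics model on its latents.
dyn = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.GELU(),
                    nn.Conv2d(32, 32, 3, padding=1))
opt = torch.optim.AdamW(dyn.parameters(), lr=1e-4)
tok.requires_grad_(False)
for _ in range(100):
    x_t, x_next = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
    with torch.no_grad():                  # latents come from the frozen tokeniser
        z_t, z_next = tok.encoder(x_t), tok.encoder(x_next)
    loss = nn.functional.mse_loss(dyn(z_t), z_next)
    opt.zero_grad(); loss.backward(); opt.step()
```

One plausible reading of the runtime-adjustable compression is to decouple the compression ratio from a fixed convolution stride: run a stride-1 causal convolution and choose the subsampling factor at call time. The sketch below, with the hypothetical `AdjustableCausalConv1d` and its `ratio` argument, illustrates that idea under stated assumptions; the paper's actual operator is not reproduced here.

```python
# Hedged sketch: causal temporal convolution with a call-time compression ratio.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdjustableCausalConv1d(nn.Module):
    def __init__(self, channels, kernel_size=4):
        super().__init__()
        self.kernel_size = kernel_size
        self.conv = nn.Conv1d(channels, channels, kernel_size)  # stride 1

    def forward(self, x, ratio: int = 2):
        # Left-pad so each output sees only current and past steps (causal).
        x = F.pad(x, (self.kernel_size - 1, 0))
        y = self.conv(x)
        # Keep the last output of each block of `ratio` steps; the ratio is
        # chosen at call time rather than baked into the convolution stride.
        return y[..., ratio - 1::ratio]

layer = AdjustableCausalConv1d(channels=16)
x = torch.randn(2, 16, 64)                  # (batch, channels, time)
print(layer(x, ratio=2).shape)              # torch.Size([2, 16, 32])
print(layer(x, ratio=4).shape)              # torch.Size([2, 16, 16])
```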
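The same weights serve every compression ratio, so a single pretrained tokeniser could, under this reading, be adapted to downstream tasks with different resolution or memory budgets without retraining.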
Submission Number: 134