Extending Deep Learning Emulation Across Parameter Regimes to Assess Stochastically Driven Spontaneous Transition Events

Published: 03 Mar 2024, Last Modified: 10 May 2024
Venue: AI4DiffEqtnsInSci @ ICLR 2024 (Poster)
License: CC BY 4.0
Keywords: Probabilistic Deep Learning, Fine-tuning, Generalisation, Stochastic Dynamics, Fluid Dynamics
TL;DR: We use fine-tuning to generalise a transformer-based emulator of stochastic partial differential equations across a range of parameters, applied to a problem in fluid mechanics.
Abstract: Given the computational expense associated with simultaneous multi-task learning, we leverage fine-tuning to generalise a transformer-based neural network emulating a stochastic dynamical system across a range of parameters. Fine-tuning the network on a dataset containing a set of parameter values yields a 40-fold reduction in the required training-set size compared to ab initio training for each new parameter. This facilitates rapid adaptation of the deep learning model, which can subsequently be used across a large range of the parameter space or tailored to a specific regime of study. We demonstrate the model's ability to capture the relevant behaviour even at interpolated parameter values not seen during training. Applied to a well-researched zonal jet system, the speed-up the deep learning model provides over numerical integration, together with the ability to sample from the probabilistic model, makes uncertainty quantification, in the form of a statistical study of rare events in the physical system, computationally feasible. Our code is available at https://github.com/Ira-Shokar/Stochastic-Transformer.
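To make the fine-tuning recipe in the abstract concrete, the sketch below adapts a pretrained transformer emulator to data at a new parameter value using a small dataset. This is a minimal sketch in generic PyTorch, not the repository's actual interface: the `Emulator` class, checkpoint path, tensor shapes, learning rate, and MSE loss are all illustrative assumptions (the paper's probabilistic model would train against a likelihood-style objective and be sampled from at inference time).

```python
# Minimal fine-tuning sketch; all names, shapes, and hyperparameters are
# illustrative assumptions, not the repository's actual API.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset


class Emulator(nn.Module):
    """Placeholder transformer emulator standing in for the paper's model."""

    def __init__(self, dim=64):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, dim)

    def forward(self, x):  # x: (batch, time, dim) trajectory window
        return self.head(self.encoder(x)[:, -1])  # predict the next state


model = Emulator()
# Restore pretrained weights; the checkpoint path is a hypothetical placeholder.
model.load_state_dict(torch.load("pretrained_emulator.pt"))

# Small fine-tuning set at the new parameter value (placeholder tensors):
# per the abstract, roughly 1/40 of what ab initio training would require.
windows = torch.randn(256, 8, 64)   # trajectory windows
targets = torch.randn(256, 64)      # next-step states
loader = DataLoader(TensorDataset(windows, targets), batch_size=32, shuffle=True)

optimiser = optim.Adam(model.parameters(), lr=1e-5)  # small LR for fine-tuning
loss_fn = nn.MSELoss()  # stand-in; the probabilistic model would use an NLL-type loss

model.train()
for _ in range(10):
    for x, y in loader:
        optimiser.zero_grad()
        loss_fn(model(x), y).backward()
        optimiser.step()
```

Keeping all weights trainable with a small learning rate is one common fine-tuning choice; freezing the encoder and updating only the output head is a cheaper alternative when the adaptation data are very limited.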
Submission Number: 74