Keywords: Test-time adaptation, neural surrogates, feature alignment, generative design optimization, distribution shift, simulation
Abstract: Machine learning is increasingly used in engineering to accelerate costly simulations and enable end-to-end design optimization, mapping inputs (e.g., initial conditions, parameters, or meshes) to simulation results or design candidates. However, large models are often pretrained on datasets generated under assumptions (e.g., geometries or configurations) that may not hold at test time, resulting in significant performance degradation. Test-time adaptation (TTA) mitigates such distribution shifts by leveraging inputs online, at test time; it avoids costly re-training and does not require access to ground-truth labels. In this work, we propose Stable Adaptation at Test-Time for Simulation (SATTS), a novel method for improving the performance of neural surrogates at deployment. SATTS leverages latent covariance structures to perform stable feature alignment and self-calibration, enabling it to excel in high-dimensional settings. To the best of our knowledge, this is the first study of TTA in the context of simulation surrogates and generative design optimization.
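To make the idea of covariance-based test-time feature alignment concrete, below is a minimal, hedged sketch of one standard approach (CORAL-style whitening and recoloring of latent features). The SATTS method itself is not specified here; the function name `align_features` and all statistics shown are illustrative assumptions, not the paper's actual algorithm or API.

```python
import numpy as np

def align_features(z_test, src_mean, src_cov, eps=1e-5):
    """Align test-time latent features to stored source statistics.

    Whitens the test batch with its own covariance, then recolors it
    with the source covariance (a CORAL-style alignment sketch).
    """
    d = z_test.shape[1]
    t_mean = z_test.mean(axis=0)
    t_cov = np.cov(z_test, rowvar=False) + eps * np.eye(d)

    # Inverse square root of the test covariance (whitening transform).
    w_vals, w_vecs = np.linalg.eigh(t_cov)
    t_inv_sqrt = w_vecs @ np.diag(w_vals ** -0.5) @ w_vecs.T

    # Square root of the source covariance (recoloring transform).
    s_vals, s_vecs = np.linalg.eigh(src_cov + eps * np.eye(d))
    s_sqrt = s_vecs @ np.diag(np.sqrt(np.clip(s_vals, 0.0, None))) @ s_vecs.T

    return (z_test - t_mean) @ t_inv_sqrt @ s_sqrt + src_mean

# Toy demonstration with synthetic latents (stand-ins, not real surrogate features).
rng = np.random.default_rng(0)
src = rng.normal(size=(512, 8))                           # "source" latents
src_mean, src_cov = src.mean(axis=0), np.cov(src, rowvar=False)
shifted = rng.normal(loc=3.0, scale=2.0, size=(256, 8))   # shifted "test" latents
aligned = align_features(shifted, src_mean, src_cov)
```

After alignment, the test batch's mean and covariance match the stored source statistics, which is the basic mechanism a covariance-leveraging TTA method builds on; a full method would additionally handle stability (e.g., regularization for small batches, as the `eps` term gestures at).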
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 18315