Test-time Generalization for Physics through Neural Operator Splitting

19 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Neural operators, Zero shot generalization, Test time adaptation, Partial differential equations (PDEs)
TL;DR: We introduce a test-time generalization framework that approximates unseen dynamics as compositions of pretrained neural operators, improving zero-shot generalization without updating parameters.
Abstract: Neural operators have shown promise in learning solution maps of partial differential equations (PDEs), but they often struggle to generalize when test inputs lie outside the training distribution, such as novel initial conditions, unseen PDE coefficients, or unseen physics. Prior works address this limitation with large-scale multi-physics pretraining followed by fine-tuning, but this still requires examples from the new dynamics, falling short of true zero-shot generalization. In this work, we propose a method to enhance generalization at test time, i.e., without modifying pretrained weights. Building on DISCO, which provides a dictionary of neural operators trained across different dynamics, we introduce a neural operator splitting strategy that, at test time, searches over compositions of training operators to approximate unseen dynamics. On challenging out-of-distribution tasks including parameter extrapolation and novel combinations of physics phenomena, our approach achieves state-of-the-art zero-shot generalization results, while also recovering the underlying PDE parameters. These results underscore test-time computation as a key avenue for building flexible, compositional, and generalizable neural operators.
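The core idea, searching over compositions of fixed pretrained operators to match unseen dynamics, can be illustrated with a toy sketch. This is not the paper's method or the DISCO codebase: the dictionary entries here are exact spectral solvers standing in for pretrained neural operators, and all names (`step_diffusion`, `step_advection`, `compose`) are hypothetical. The "unseen" dynamics is advection-diffusion, and the test-time search picks the splitting composition that best explains one observed snapshot pair, with no parameter updates.

```python
import itertools
import numpy as np

# Hypothetical 1-D periodic setup; exact spectral steps stand in for
# pretrained neural operators from a DISCO-style dictionary.
N, L, dt = 128, 2 * np.pi, 0.05
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi  # angular wavenumbers

def step_diffusion(u, nu=0.1):
    # one exact time step of u_t = nu * u_xx
    return np.fft.ifft(np.fft.fft(u) * np.exp(-nu * k**2 * dt)).real

def step_advection(u, c=1.0):
    # one exact time step of u_t + c * u_x = 0
    return np.fft.ifft(np.fft.fft(u) * np.exp(-1j * c * k * dt)).real

dictionary = {"diff": step_diffusion, "adv": step_advection}

# "Unseen" dynamics: advection-diffusion, seen only as one (u0, u1) pair.
u0 = np.sin(x) + 0.5 * np.cos(3 * x)
u1 = step_diffusion(step_advection(u0))  # ground-truth next state

def compose(names, u):
    # operator-splitting composition: apply dictionary operators in sequence
    for name in names:
        u = dictionary[name](u)
    return u

# Test-time search: weights stay frozen; only the composition is chosen.
candidates = [c for r in (1, 2) for c in itertools.permutations(dictionary, r)]
best = min(candidates, key=lambda names: np.mean((compose(names, u0) - u1) ** 2))
print(sorted(best))  # -> ['adv', 'diff']
```

Because the two spectral multipliers commute, either ordering of the split step reproduces the true advection-diffusion step exactly, so the search selects the composition over either single operator; with learned operators the residual would be approximate rather than exact.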
Supplementary Material: zip
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Submission Number: 19311