Keywords: Neural operators, adaptive computation, partial differential equations, test-time refinement, adaptive mesh refinement, scientific machine learning, physics-informed learning, computational efficiency
TL;DR: We introduce ATCA, a framework that adaptively allocates test-time refinement compute across spatial regions in neural PDE solvers, achieving 2.5–4.3× FLOP savings or 18–37% lower error compared to uniform refinement.
Abstract: Neural PDE solvers achieve remarkable speedups over classical numerical methods but apply uniform computational effort across the spatial domain at inference time, ignoring the inherent non-uniformity of solution complexity. We introduce ATCA (Adaptive Test-time Compute Allocation), a framework that learns to allocate variable refinement depth across the spatial domain during inference. ATCA trains a lightweight difficulty estimator network that predicts local solution error from an initial neural operator prediction, then routes computational budget to high-difficulty regions via spatially non-uniform iterative refinement. We establish a theoretical connection between ATCA and classical adaptive mesh refinement, proving that optimal compute allocation under a fixed budget satisfies an error equidistribution principle analogous to Dörfler marking. Experiments on four benchmark PDEs (Burgers’ equation, Darcy flow, Navier-Stokes at varying Reynolds numbers, and shallow water equations) demonstrate that ATCA achieves equivalent accuracy to uniform refinement methods such as PDE-Refiner while using 2.5–4.3× fewer FLOPs at inference, or alternatively achieves 18–37% lower error at equal compute budgets. Our results suggest that test-time compute scaling for neural PDE solvers, analogous to recent findings in large language models, benefits substantially from adaptive rather than uniform allocation strategies.
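The allocation principle the abstract references (Dörfler-style marking with error equidistribution under a fixed budget) can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function names, the `theta` marking fraction, and the proportional-to-error budget split are assumptions made for the sketch.

```python
import numpy as np

def dorfler_mark(errors, theta=0.7):
    """Classical Dorfler (bulk-chasing) marking: select the smallest set of
    regions whose estimated error accounts for at least a fraction `theta`
    of the total estimated error."""
    order = np.argsort(errors)[::-1]           # regions sorted by descending error
    cumulative = np.cumsum(errors[order])
    k = int(np.searchsorted(cumulative, theta * errors.sum())) + 1
    return order[:k]

def allocate_refinement(errors, budget, theta=0.7):
    """Distribute `budget` refinement steps over the marked regions,
    proportionally to their estimated local error (equidistribution)."""
    marked = dorfler_mark(errors, theta)
    steps = np.zeros(len(errors), dtype=int)
    weights = errors[marked] / errors[marked].sum()
    steps[marked] = np.maximum(1, np.round(weights * budget).astype(int))
    return steps

# Example: six spatial regions with estimated local errors
# (in ATCA these would come from the difficulty estimator network)
errors = np.array([0.50, 0.25, 0.10, 0.08, 0.05, 0.02])
print(dorfler_mark(errors, theta=0.7))        # regions covering >= 70% of error mass
print(allocate_refinement(errors, budget=10)) # per-region refinement steps
```

Under this sketch, refinement iterations concentrate on the few regions carrying most of the predicted error, while low-difficulty regions receive no extra compute; uniform refinement would instead spend the same number of steps everywhere.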
Journal Opt In: Yes, I want to participate in the IOP focus collection submission
Journal Corresponding Email: benrossjenkins@gmail.com
Submission Number: 106