ASNO: An Interpretable Attention-Based Spatio-Temporal Neural Operator for Robust Scientific Machine Learning

Published: 01 Jul 2025, Last Modified: 01 Jul 2025 · ICML 2025 R2-FM Workshop Poster · CC BY 4.0
Keywords: spatio-temporal neural operator, uncertainty quantification, scientific machine learning, out-of-distribution generalization, interpretability, attention mechanism, PDE modeling
Abstract: Scientific machine learning (SciML) aims to model complex physical processes from data, but a key challenge lies in capturing long-range spatio-temporal dependencies while generalizing across varying environmental conditions. This is especially critical in real-world applications such as additive manufacturing, where machine parameters are controlled but environmental factors fluctuate unpredictably. Traditional models often learn temporal dynamics but lack physical interpretability and robustness to distributional shifts. To address these limitations, we introduce the Attention-based Spatio-Temporal Neural Operator (ASNO), a novel architecture that decouples temporal and spatial modeling using separable attention mechanisms. ASNO is inspired by the implicit–explicit decomposition of the backward differentiation formula (BDF): it employs a Transformer encoder to forecast homogeneous temporal dynamics and a nonlocal attention-based neural operator (NAO) to integrate spatial interactions and external forcing. This design enhances interpretability by isolating contributions of historical states and external fields, while enabling zero-shot generalization to new physical regimes. Experiments on standard SciML benchmarks—including chaotic ODEs, PDEs, and additive manufacturing—demonstrate that ASNO consistently outperforms baseline models in accuracy, stability, and generalizability, making it a promising framework for interpretable and adaptive physics-informed learning.
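The separable design in the abstract (a temporal stage over each point's history, followed by a nonlocal spatial attention stage that folds in external forcing) can be illustrated with a minimal NumPy sketch. Everything here is an illustrative assumption: the `asno_step` name, the array shapes, and the additive forcing coupling are not taken from the paper's implementation.

```python
import numpy as np

def attention(q, k, v):
    # scaled dot-product attention: softmax(q k^T / sqrt(d)) v
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v

def asno_step(history, forcing):
    """One hypothetical ASNO-style update (toy sketch, not the paper's code).
    history: (n_points, n_steps, d) past states per spatial location
    forcing: (n_points, d) external field at the current step
    """
    # Temporal stage: each spatial point attends over its own history
    # (the Transformer-encoder role: forecasting homogeneous dynamics).
    q = history[:, -1:, :]                       # query = most recent state
    h = attention(q, history, history)[:, 0, :]  # (n_points, d)

    # Spatial stage: nonlocal attention across all points, with the
    # forcing field mixed in (the NAO role in the paper's description).
    kv = h + forcing
    return attention(h[None], kv[None], kv[None])[0]

rng = np.random.default_rng(0)
out = asno_step(rng.normal(size=(16, 8, 4)), rng.normal(size=(16, 4)))
print(out.shape)  # (16, 4)
```

The two-stage split mirrors the implicit–explicit flavor described in the abstract: the temporal stage sees only past states, while spatial coupling and forcing enter in a separate, inspectable step.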
Submission Number: 149