Physics-Informed Neural Operators with Multi-Method Explainability for Ocean Temperature Prediction

Published: 11 Nov 2025, Last Modified: 23 Dec 2025
XAI4Science Workshop 2026
License: CC BY 4.0
Track: Regular Track (Page limit: 6-8 pages)
Keywords: Physics-Informed Neural Networks, Explainable AI, Scientific Machine Learning, Ocean Temperature Prediction
Abstract: Accurate ocean temperature prediction is critical for climate modeling and marine ecosystem management, yet current machine learning approaches often lack physical consistency and interpretability. Traditional physics-based models provide interpretable predictions but struggle with computational efficiency, while modern deep learning achieves impressive accuracy but lacks explainability. Existing neural operator studies, however, focus primarily on synthetic datasets, without systematic comparison of physics-informed constraints or multi-method interpretability validation on real observational data. We present a comprehensive study comparing four neural architectures (DeepONet, physics-informed DeepONet, FNO, LSTM) for ocean temperature prediction from real Argo float measurements. Our framework incorporates heat-equation constraints through physics-informed loss functions and applies four complementary XAI methods (Integrated Gradients, Saliency Maps, DeepLIFT, GradientSHAP), with rigorous statistical validation through 10-seed Deep Ensembles. Physics-informed DeepONet achieves the best performance, with a 17.6\% error reduction (RMSE: 0.718$\pm$0.026$^{\circ}$C vs. 0.871$\pm$0.051$^{\circ}$C, $p<0.001$) and a 48\% variance reduction, demonstrating that PDE constraints meaningfully enhance neural operators on real-world data. Multi-method XAI analysis reveals that pressure and salinity measurements dominate predictions across all architectures (attribution magnitudes of 0.3--0.7, indicating primary predictive importance), with physics constraints increasing salinity sensitivity by 6.3\%; layer-conductance analysis shows that the LSTM's final layer exhibits 60$\times$ higher importance than its intermediate layers. Our results establish a methodological framework combining physics-informed operator learning with comprehensive explainability for trustworthy scientific machine learning applications.
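
The paper's implementation is not shown on this page; as a minimal sketch of what a heat-equation-constrained loss of the kind described in the abstract could look like in PyTorch (all names, the diffusivity `kappa`, and the residual weight `lam` are illustrative assumptions, not the authors' code):

```python
import torch

def heat_equation_loss(model, t, z, temp_obs, kappa=1e-4, lam=0.1):
    """Data-fitting loss plus a penalty on the residual of the 1-D heat
    equation u_t = kappa * u_zz, computed with autograd.

    t, z       -- collocation coordinates, shape (N, 1), requires_grad=True
    temp_obs   -- observed temperatures at (t, z)
    kappa, lam -- assumed diffusivity and residual weight (illustrative)
    """
    u = model(torch.cat([t, z], dim=1))  # predicted temperature

    # Autograd derivatives for the PDE residual u_t - kappa * u_zz
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
    u_z = torch.autograd.grad(u, z, ones, create_graph=True)[0]
    u_zz = torch.autograd.grad(u_z, z, torch.ones_like(u_z), create_graph=True)[0]

    data_loss = torch.mean((u - temp_obs) ** 2)
    pde_loss = torch.mean((u_t - kappa * u_zz) ** 2)
    return data_loss + lam * pde_loss
```

Weighting the residual term (`lam`) against the data term is the usual lever for trading physical consistency against fit; the abstract's 48\% variance reduction suggests the constraint also acts as a regularizer across seeds.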
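The four attribution methods named in the abstract all exist in Captum; a minimal sketch of applying them to a trained PyTorch predictor follows, assuming a dummy stand-in model, random inputs, and a zero baseline (all assumptions for illustration):

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients, Saliency, DeepLift, GradientShap

# Illustrative stand-in for a trained predictor mapping
# (pressure, salinity, ...) features to temperature.
model = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 1))
model.eval()

inputs = torch.randn(16, 4, requires_grad=True)   # dummy batch of profiles
baselines = torch.zeros_like(inputs)              # zero-reference baseline

attributions = {
    "IntegratedGradients": IntegratedGradients(model).attribute(inputs, baselines=baselines),
    "Saliency": Saliency(model).attribute(inputs),
    "DeepLIFT": DeepLift(model).attribute(inputs, baselines=baselines),
    "GradientSHAP": GradientShap(model).attribute(inputs, baselines=baselines),
}

# Per-feature importance: mean |attribution| over the batch, per method;
# agreement across methods is what supports claims like "pressure and
# salinity dominate predictions".
for name, a in attributions.items():
    print(name, a.abs().mean(dim=0))
```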
Supplementary Material: zip
Submission Number: 15