Towards Inherently Interpretable Deep Learning for Accelerating Scientific Discoveries in Climate Science

Published: 01 Jan 2023, Last Modified: 25 Jan 2025 · SIGSPATIAL/GIS 2023 · CC BY-SA 4.0
Abstract: While deep learning models have high representational power and promising performance, their predictions often come without evidence of the reasons behind them, a major concern that limits their usability for scientific discovery. We propose a Neural Additive Convolutional Neural Network (NA-CNN) to enhance model interpretability and thereby facilitate scientific discoveries in climate science. To assess the interpretation quality of NA-CNN, we perform experiments on the El Niño identification task, where the ground truth for El Niño patterns is known and can be used for validation. Experimental results show that, compared to Spatial Attention and state-of-the-art post-hoc explanation techniques, NA-CNN achieves higher interpretation precision, remarkably improved physical consistency, and reduced redundancy. These qualities provide an encouraging basis for domain scientists to focus their analysis on potentially relevant patterns and to derive laws governing phenomena with unknown physical processes.
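The additive structure suggested by the model's name can be illustrated with a minimal sketch. This is an assumption about the general neural-additive design (one small feature network per input channel, with scalar contributions summed into the prediction), not the authors' exact NA-CNN architecture; all class and function names here are hypothetical, and the weights are untrained.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

class AdditiveChannelModel:
    """Toy additive model (illustrative, not the paper's NA-CNN):
    one tiny conv + pooling + linear head per input channel;
    prediction = sum of per-channel scalar contributions."""

    def __init__(self, n_channels, kernel_size=3, seed=0):
        rng = np.random.default_rng(seed)
        # One random 1-D conv kernel and output weight per channel.
        self.kernels = [rng.normal(size=kernel_size) for _ in range(n_channels)]
        self.weights = rng.normal(size=n_channels)

    def channel_contribution(self, i, x_i):
        # Valid-mode convolution, ReLU, global average pooling, linear head.
        h = relu(np.convolve(x_i, self.kernels[i], mode="valid"))
        return self.weights[i] * h.mean()

    def predict(self, x):
        # x: array of shape (n_channels, length)
        contribs = [self.channel_contribution(i, ch) for i, ch in enumerate(x)]
        return sum(contribs), contribs

model = AdditiveChannelModel(n_channels=3)
x = np.linspace(-1.0, 1.0, 30).reshape(3, 10)
y, contribs = model.predict(x)
# The additive structure guarantees the prediction decomposes exactly
# into per-channel contributions, which is what makes such models
# inherently interpretable.
print(np.isclose(y, sum(contribs)))  # True
```

Because the output is an explicit sum, each channel's contribution can be read off directly, in contrast to post-hoc explanation techniques that approximate attributions after training.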