Keywords: deep learning, explainability, feature importance, IceNet, sea ice concentration (SIC), climate
TL;DR: A study of the impact of XAI-based feature reduction in deep neural networks, using the sea ice forecasting model IceNet as a case study.
Abstract: With the state-of-the-art IceNet model, deep learning has contributed to an important aspect of climate research: leveraging a range of climate inputs to provide accurate forecasts of Arctic sea ice concentration (SIC).
The deep learning subfield of eXplainable AI (XAI) has gained considerable attention as a means of gauging the feature importance of neural networks, for instance by leveraging network gradients. In recent work, an XAI study of IceNet was conducted, using gradient saliency maps to interrogate its feature importance.
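To make the notion of gradient saliency concrete, the following is a minimal sketch, not the original study's implementation, of how per-feature saliency scores can be derived from input gradients. It assumes a PyTorch model mapping a tensor of shape (batch, n_features, H, W) to SIC forecasts; the function name and aggregation choice are illustrative.

```python
import torch

def feature_saliency(model, x):
    """Score each input feature by the mean absolute gradient
    of the model output with respect to that input channel."""
    x = x.clone().requires_grad_(True)
    output = model(x)
    # Reduce the forecast to a scalar so a single backward pass
    # yields gradients for every input element.
    output.sum().backward()
    # Average |gradient| over batch and spatial dimensions,
    # leaving one saliency score per input feature (channel).
    return x.grad.abs().mean(dim=(0, 2, 3))
```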
Most XAI studies report the feature importance revealed by the chosen XAI method, but rarely provide a thorough analysis of the effects of reducing the number of input variables accordingly.
In this paper, we train versions of IceNet with drastically reduced numbers of input features, selected according to the XAI results, and investigate the effects on the sea ice predictions, both on average and with respect to specific events.
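One plausible reading of this XAI-guided reduction step, sketched below under stated assumptions rather than taken from the paper, is to rank features by the saliency scores above and retrain on the top-k subset. Here `icenet`, `train_icenet`, `all_inputs`, `targets`, and `k` are hypothetical placeholders, not the authors' API.

```python
# Rank input features by saliency and keep the k most salient channels.
scores = feature_saliency(icenet, validation_batch)
top_k = scores.argsort(descending=True)[:k]

# Restrict the training inputs to the selected channels and retrain.
reduced_inputs = all_inputs[:, top_k]
reduced_model = train_icenet(reduced_inputs, targets)
```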
Our results provide evidence that the model generally performs better when fewer features are used, but that in the case of anomalous events a larger number of features is beneficial.
We believe our thorough study of IceNet in terms of XAI-revealed feature importance may inspire similar analyses in other deep learning-based problem scenarios and application domains.
Submission Number: 52