Abstract: The rising adoption of AI models in real-world applications characterized by sensor data creates an urgent need for inference explanation mechanisms to support domain experts in making informed decisions. Explainable AI (XAI) opens up a new opportunity to extend black-box deep learning models with such inference explanation capabilities. However, existing XAI approaches for tabular, image, and graph data are ineffective in contexts with spatio-temporal data. In this paper, we fill this gap by proposing an XAI method specifically tailored to spatio-temporal data in sensor networks, where observations are collected at regular time intervals and at different locations. Our model-agnostic masking meta-optimization method for deep learning models uncovers global salient factors influencing model predictions, and generates explanations that take into account multiple analytical views, such as features, timesteps, and node locations. Our qualitative and quantitative experiments with real-world forecasting datasets show that our approach effectively extracts explanations of model predictions, and is competitive with state-of-the-art approaches.
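The abstract does not spell out the optimization itself, but the general idea of mask-based explanation over features, timesteps, and node locations can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy, not the paper's algorithm: `ToyForecaster`, `learn_global_mask`, and the sparsity weight `lam` are hypothetical names, and a single global mask is fit so that the masked input still reproduces the frozen model's predictions while unimportant entries shrink toward zero.

```python
import torch

# Hypothetical frozen forecaster mapping (batch, nodes, timesteps, features) -> per-node forecast.
# A toy linear model stands in for any differentiable spatio-temporal network.
class ToyForecaster(torch.nn.Module):
    def __init__(self, n_nodes=10, n_steps=12, n_feats=3):
        super().__init__()
        self.linear = torch.nn.Linear(n_nodes * n_steps * n_feats, n_nodes)

    def forward(self, x):                       # x: (batch, nodes, steps, feats)
        return self.linear(x.flatten(1))        # -> (batch, nodes)


def learn_global_mask(model, data, epochs=200, lam=0.05, lr=0.05):
    """Fit one global mask over (nodes, timesteps, features): the masked input must
    preserve the model's original predictions, while a sparsity penalty (lam) drives
    non-salient entries toward zero, exposing the globally important factors."""
    model.eval()
    mask_logits = torch.zeros(data.shape[1:], requires_grad=True)   # (nodes, steps, feats)
    opt = torch.optim.Adam([mask_logits], lr=lr)
    with torch.no_grad():
        target = model(data)                    # predictions to be preserved
    for _ in range(epochs):
        mask = torch.sigmoid(mask_logits)       # soft mask in (0, 1)
        pred = model(data * mask)               # perturb inputs through the mask
        loss = torch.nn.functional.mse_loss(pred, target) + lam * mask.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask_logits).detach()  # saliency over nodes x steps x feats


# Usage sketch on synthetic sensor readings.
model = ToyForecaster()
data = torch.randn(32, 10, 12, 3)
saliency = learn_global_mask(model, data)
print(saliency.mean(dim=(1, 2)))                # e.g. aggregate to a per-node importance view
```

Because the learned mask is indexed by node, timestep, and feature, it can be marginalized along any of these axes to obtain the kind of per-view explanations the abstract refers to; the actual meta-optimization in the paper may differ substantially from this sketch.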