SPASE: Spatial Saliency Explanation For Time Series Models

Published: 01 Jan 2024, Last Modified: 14 Nov 2024 · ICASSP 2024 · CC BY-SA 4.0
Abstract: Recent advances in Machine Learning (ML), Deep Learning (DL), and Artificial Intelligence (AI) have produced models that are increasingly complex and large in both architecture and parameter count. These complex ML/DL models have surpassed the state of the art in most fields of computer science, including computer vision, NLP, tabular data prediction, and time series forecasting. As model performance has increased, model explainability and interpretability have become essential for explaining and justifying model outcomes, especially in business use cases. The domain of model explainability has seen significant progress for computer vision and Natural Language Processing (NLP) tasks, with fundamental research on both black-box and white-box techniques. In this paper, we propose SPASE, a novel explainability technique for black-box time series models applied to forecasting and anomaly detection problems.