Abstract: Model interpretation methods play a critical role in enhancing the applicability of time series neural networks in high-risk domains. However, existing interpretation methods, primarily designed for static data such as images, do not yield satisfactory results when dealing with time series data. Although some studies have explored the time dimension and evaluated feature importance at each time point through feature removal, they neglect the potential correlations among multiple features that jointly impact the model's predictive outcomes. To address this limitation, we introduce the Shapley value into time series model interpretation and propose the TFS (Temporal Feature Sampling) algorithm. During interpretation, the algorithm computes importance scores for feature subsets that contain the removed features, and it models the feature distribution by sampling within the time series data. We conducted comparative experiments between TFS and several baseline methods on two synthetic datasets and one real-world dataset, and the results confirm the efficiency and effectiveness of our algorithm.
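To make the described idea concrete, below is a minimal sketch of Shapley-style feature attribution for a single time step, where "removed" features are replaced by values resampled from other time steps of the same series. This is not the authors' TFS implementation; the function name `estimate_temporal_shapley`, the callable `model`, and all parameters are hypothetical and chosen only for illustration.

```python
import numpy as np

def estimate_temporal_shapley(model, x, t, n_samples=100, rng=None):
    """Monte Carlo Shapley-style importance of each feature at time step t.

    Hypothetical sketch: removed features are replaced by values sampled
    from other time steps of the same series, approximating the feature
    distribution by sampling within the time series data.

    model : callable mapping an array of shape (T, D) to a scalar prediction
    x     : np.ndarray of shape (T, D), one time series
    t     : index of the time step being explained
    """
    rng = rng or np.random.default_rng()
    T, D = x.shape
    importance = np.zeros(D)

    for _ in range(n_samples):
        perm = rng.permutation(D)              # random feature ordering
        x_masked = x.copy()
        # Start from a fully "removed" time step t: every feature is
        # resampled from the empirical distribution over other time steps.
        for d in range(D):
            x_masked[t, d] = x[rng.integers(T), d]
        prev_pred = model(x_masked)
        # Add features back one by one in the sampled order and accumulate
        # each feature's marginal contribution to the prediction.
        for d in perm:
            x_masked[t, d] = x[t, d]           # restore the true value
            pred = model(x_masked)
            importance[d] += pred - prev_pred
            prev_pred = pred

    return importance / n_samples
```

Averaging marginal contributions over random feature orderings is the standard Monte Carlo approximation of the Shapley value; the paper's contribution, as summarized in the abstract, lies in how the removed features are modeled via sampling from the time series itself.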
External IDs: doi:10.1007/978-981-97-2303-4_23