Stochastic Sparse Sampling: A Framework for Local Explainability in Variable-Length Medical Time Series

Published: 10 Oct 2024, Last Modified: 26 Nov 2024 · NeurIPS 2024 TSALM Workshop · CC BY 4.0
Keywords: Time Series, Healthcare, Explainable AI, Epilepsy, Neuroscience
TL;DR: We introduce a novel framework, Stochastic Sparse Sampling (SSS), for variable-length time series classification with local explainability.
Abstract: While the majority of time series classification research has focused on modeling fixed-length sequences, variable-length time series classification (VTSC) remains underexplored, despite its relevance in healthcare and various other real-world applications. Existing finite-context methods, such as Transformer-based architectures, require noisy input processing when applied to VTSC, while infinite-context methods, including recurrent neural networks, struggle with information overload over longer sequences. Furthermore, current state-of-the-art (SOTA) methods lack explainability and generally fail to provide insights for local signal regions, reducing their reliability in high-risk scenarios. To address these issues, we introduce Stochastic Sparse Sampling (SSS), a novel framework for explainable VTSC. SSS manages variable-length sequences by sparsely sampling fixed windows to compute localized predictions, which are then aggregated to form a final prediction. We apply SSS to the task of seizure onset zone (SOZ) localization, a critical VTSC problem requiring identification of seizure-inducing brain regions from variable-length electrophysiological time series. We evaluate SSS on the Epilepsy iEEG Multicenter Dataset, a heterogeneous collection of intracranial electroencephalography (iEEG) recordings, and achieve performance comparable to current SOTA methods, while enabling localized visual analysis of model predictions.
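The abstract's core mechanism (sparsely sample fixed-length windows, score each locally, then aggregate) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the function name `sss_predict`, the toy window "model", and mean-probability aggregation are not taken from the paper, which may use a different sampler, classifier, and aggregation scheme.

```python
import numpy as np

def sss_predict(series, window_model, window_len=128, n_windows=16, seed=None):
    """Sketch of SSS-style inference (assumed form, not the authors' code):
    sample fixed-length windows from a variable-length 1-D series, score
    each window with `window_model` (maps a (window_len,) array to a
    probability in [0, 1]), and average the local scores into one prediction.
    Returns (aggregate_prob, [(window_start, local_prob), ...]) so the
    per-window scores remain available for localized visual analysis."""
    rng = np.random.default_rng(seed)
    T = len(series)
    if T < window_len:
        raise ValueError("series shorter than window length")
    # Sparse sampling: a handful of random window starts, not a full scan.
    starts = rng.integers(0, T - window_len + 1, size=n_windows)
    local_probs = np.array(
        [window_model(series[s:s + window_len]) for s in starts]
    )
    # Aggregation by simple averaging (an assumption; weighted schemes are possible).
    return float(local_probs.mean()), list(zip(starts.tolist(), local_probs.tolist()))

# Toy usage: a hypothetical stand-in "model" that thresholds window energy.
toy_model = lambda w: float(np.mean(w ** 2) > 0.5)
x = np.random.default_rng(0).normal(size=1000)  # a variable-length recording
prob, local_scores = sss_predict(x, toy_model, seed=0)
```

The per-window `local_scores` are what make the framework locally explainable: each aggregate prediction can be traced back to the specific signal regions that contributed to it.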
Submission Number: 94