TL;DR: We introduce gradient-based explanation methods for survival neural networks, offering improved interpretability, time-dependent insights, and faster, more accurate alternatives for analyzing medical and multi-modal data.
Abstract: Deep learning survival models often outperform classical methods in time-to-event predictions, particularly in personalized medicine, but their "black box" nature hinders broader adoption. We propose a framework for gradient-based explanation methods tailored to survival neural networks, extending their use beyond regression and classification. We analyze the implications of their theoretical assumptions for time-dependent explanations in the survival setting and propose effective visualizations incorporating the temporal dimension. Experiments on synthetic data show that gradient-based methods capture the magnitude and direction of local and global feature effects, including time dependencies. We introduce GradSHAP(t), a gradient-based counterpart to SurvSHAP(t), which outperforms SurvSHAP(t) and SurvLIME in a computational speed vs. accuracy trade-off. Finally, we apply these methods to medical data with multi-modal inputs, revealing relevant tabular features and visual patterns, as well as their temporal dynamics.
Lay Summary: Deep learning survival models can help predict how long it takes for events (e.g., a disease) to occur, but their "black box" nature makes them hard to understand and trust. We developed new tools to explain how these models make time-based predictions for individual patients, showing which factors matter and how their influence changes over time. Our method, GradSHAP(t), gives faster and more accurate explanations than existing approaches. Tests on synthetic and real medical data show it helps to uncover important features and their time dynamics, making deep learning survival models more transparent and usable in healthcare.
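To make the idea of time-dependent gradient-based attributions concrete, here is a minimal, self-contained sketch. It is not the paper's implementation: the toy network, its random weights, and the finite-difference gradients are illustrative placeholders standing in for an actual survival neural network and its autodiff gradients. The sketch shows the core mechanic the abstract describes: differentiating the predicted survival function S(t | x) with respect to the input features at each evaluation time point, yielding an attribution per feature *and* per time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a survival network: maps 3 features to survival
# probabilities S(t | x) at 4 evaluation time points. The weights are
# random placeholders, not a trained model.
W1 = rng.normal(size=(3, 8))
W2 = rng.normal(size=(8, 4))

def survival_fn(x):
    """Return S(t | x) at 4 time points (monotonicity not enforced here)."""
    h = np.tanh(x @ W1)
    return 1.0 / (1.0 + np.exp(-(h @ W2)))  # sigmoid per time point

def grad_times_input(x, eps=1e-5):
    """Time-dependent Gradient x Input attribution via central differences.

    Returns an array of shape (n_features, n_times): the local contribution
    of each feature to the predicted survival probability at each time point.
    A real implementation would use automatic differentiation instead.
    """
    n_features = x.shape[0]
    n_times = survival_fn(x).shape[0]
    attr = np.zeros((n_features, n_times))
    for j in range(n_features):
        x_plus, x_minus = x.copy(), x.copy()
        x_plus[j] += eps
        x_minus[j] -= eps
        grad_j = (survival_fn(x_plus) - survival_fn(x_minus)) / (2 * eps)
        attr[j] = grad_j * x[j]  # gradient scaled by the feature value
    return attr

x = np.array([0.5, -1.2, 2.0])
attr = grad_times_input(x)  # shape (3, 4): features x time points
```

Plotting each row of `attr` against the time grid gives the kind of time-resolved feature-effect curve the paper visualizes; Shapley-style variants such as GradSHAP(t) additionally average such gradients along paths from baseline inputs.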
Link To Code: https://github.com/bips-hb/Survival-XAI-ICML
Primary Area: Applications->Health / Medicine
Keywords: Deep Learning, Survival Analysis, Explainable Artificial Intelligence, Interpretable Machine Learning, XAI, IML, Feature Attribution
Submission Number: 16311