Survival VAE: Robust Local Explanations via Double-Pass Risk Consistency

16 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: XAI, Survival Analysis
Abstract: In the era of advanced machine learning, the need to explain models has grown significantly. One particular domain, survival analysis, has benefited from the rise of deep learning but has lagged behind in the development of methods for explaining risk and survival models. Only a few works have adapted explainable AI methods, such as LIME and SHAP, to survival analysis. Despite these efforts, explaining survival models remains challenging given the complex nature of the data used for survival predictions and the presence of censoring. In this work, we propose a local feature identification method that inherently operates on the instance ordering induced by event and censoring times. It enables faithful, per-sample feature importance by identifying which reconstructed input features preserve consistency in predicted survival risk across a double pass through a variational autoencoder. Empirical results on the large multi-cohort dataset from The Cancer Genome Atlas demonstrate the superior quantitative performance of our method. Qualitatively, analysis of the mask weights highlights the biological relevance of the feature selection process. This information can be used to identify new diagnostic markers and treatment targets for cancer patients.
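The double-pass idea described in the abstract can be sketched roughly as follows. This is a minimal illustration only: the paper's VAE, survival model, and mask parameterization are not given here, so the linear risk model, the noisy-identity stand-in for the autoencoder, and the per-feature restoration loop are all hypothetical placeholders for the general pattern (reconstruct the input, then score each feature by how much restoring its original value perturbs the predicted risk).

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 5

# Hypothetical stand-ins (NOT the paper's models): a linear risk
# predictor and a noisy identity map playing the role of the VAE.
W = rng.normal(size=n_features)
def risk(x):
    return float(x @ W)                     # scalar predicted risk

def reconstruct(x):
    return x + rng.normal(scale=0.05, size=x.shape)  # "decoder" output

def double_pass_importance(x):
    """Score each feature by how much swapping its original value
    back into the reconstruction shifts the predicted risk.
    A small shift means the reconstructed feature already
    preserves risk consistency for this sample."""
    x_hat = reconstruct(x)                  # first pass: reconstruct input
    base = risk(x_hat)                      # second pass: risk of reconstruction
    scores = np.empty_like(x)
    for j in range(x.size):
        x_mix = x_hat.copy()
        x_mix[j] = x[j]                     # restore original feature j
        scores[j] = abs(risk(x_mix) - base)
    return scores

x = rng.normal(size=n_features)
print(double_pass_importance(x))
```

In this toy version the importance scores are simple absolute risk deviations; the paper instead learns continuous mask weights, but the consistency signal being optimized is the same kind of quantity.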
Primary Area: interpretability and explainable AI
Submission Number: 8111