Chain-of-Influence: Tracing Interdependencies Across Time and Features in Clinical Predictive Modeling
Keywords: Deep Learning, Interpretable Deep Learning, Explainable AI, XAI, Attention, Clinical Time-Series, Feature Interaction, Healthcare, Predictive Modeling
TL;DR: We developed an AI model that traces a "chain of influence" through patient data, explaining how clinical measurements influence one another over time to shape a predicted health outcome.
Abstract: Modeling clinical time-series data is hampered by the challenge of capturing latent, time-varying dependencies among features. State-of-the-art approaches often rely on black-box mechanisms or simple aggregation, failing to explicitly model how the influence of one clinical variable propagates through others over time. We propose $\textbf{Chain-of-Influence (CoI)}$, an interpretable deep learning framework that constructs an explicit, time-unfolded graph of feature interactions. CoI enables the tracing of influence pathways, providing a granular audit trail that shows how any feature at any time contributes to the final prediction, both directly and through its influence on other variables. We evaluate CoI on mortality and disease progression tasks using the MIMIC-IV dataset and a chronic kidney disease cohort. Our framework achieves state-of-the-art predictive performance (AUROC of 0.960 on CKD progression and 0.950 on ICU mortality), with deletion-based sensitivity analyses confirming that CoI's learned attributions faithfully reflect its decision process. Through case studies, we demonstrate that CoI uncovers clinically meaningful, patient-specific patterns of disease progression, offering enhanced transparency into the temporal and cross-feature dependencies that inform clinical decision-making.
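To make the idea of a time-unfolded feature-interaction graph concrete, the sketch below shows one plausible way such a structure could be realized with attention: each (time step, feature) pair becomes a token, attention is masked so influence only flows forward in time, and the resulting attention matrix serves as a traceable influence graph. This is a minimal illustration under our own assumptions, not the authors' implementation; all class and variable names (`CoISketch`, `influence_graph`, etc.) are hypothetical.

```python
# Minimal, hypothetical sketch of a time-unfolded feature-interaction graph
# built with masked attention. Not the paper's actual architecture.
import torch
import torch.nn as nn


class CoISketch(nn.Module):
    """Treat each (time step, feature) pair as a token and learn a directed
    attention matrix over those tokens, masked so that influence can only
    flow from earlier (or same-time) tokens to later ones."""

    def __init__(self, n_features: int, d_model: int = 32):
        super().__init__()
        self.embed = nn.Linear(1, d_model)            # scalar measurement -> embedding
        self.feat_emb = nn.Embedding(n_features, d_model)
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.head = nn.Linear(d_model, 1)             # pooled representation -> risk score

    def forward(self, x):                             # x: (T, F) for one patient
        T, F = x.shape
        feat_ids = torch.arange(F).repeat(T)          # token -> feature index
        time_ids = torch.arange(T).repeat_interleave(F)  # token -> time index
        tokens = self.embed(x.reshape(-1, 1)) + self.feat_emb(feat_ids)  # (T*F, d)

        scores = self.q(tokens) @ self.k(tokens).T / tokens.shape[-1] ** 0.5
        # Mask so a token is only influenced by tokens at the same or earlier
        # time steps (the "time-unfolded" structure).
        causal = time_ids.unsqueeze(1) >= time_ids.unsqueeze(0)
        attn = torch.softmax(scores.masked_fill(~causal, float("-inf")), dim=-1)

        ctx = attn @ self.v(tokens)                   # (T*F, d)
        risk = torch.sigmoid(self.head(ctx.mean(dim=0)))
        return risk, attn                             # attn acts as the influence graph


# Chaining attention weights along the graph gives multi-step influence
# pathways, e.g. feature_a(t1) -> feature_b(t3) -> prediction.
model = CoISketch(n_features=5)
risk, influence_graph = model(torch.randn(8, 5))      # 8 time steps, 5 features
print(risk.item(), influence_graph.shape)             # scalar risk, (40, 40) graph
```

In this reading, tracing an influence pathway amounts to following chains of high-weight edges in the attention matrix back from the prediction to earlier (time, feature) tokens; how CoI itself constructs and audits these pathways is detailed in the paper.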
Primary Area: interpretability and explainable AI
Submission Number: 2190