Keywords: Explainable artificial intelligence (XAI), Interpretable machine learning, Interpretability, Deep learning, Time series analysis, Segmentation, End-to-end, Self-explaining models, Physiological signals, Photoplethysmogram (PPG), Electrocardiogram (ECG), Obstructive sleep apnea (OSA), Atrial fibrillation (AF), Heart rate variability (HRV), Blood pressure (BP)
TL;DR: We propose a generalized self-explaining multi-view deep learning architecture that generates task-relevant, human-interpretable representations during model inference to stratify health information from physiological signals.
Abstract: Explainable artificial intelligence (XAI) offers enhanced transparency by revealing the key features, relationships, and patterns within the input data that drive model decisions. In healthcare and clinical applications, where physiological signals serve as model inputs for decision making, such transparency is critical for analyzing inference causality, ensuring reliability, identifying biases, and uncovering new insights. In this work, we introduce a self-explaining multi-view deep learning architecture that generates task-relevant, human-interpretable masks attributing feature importance during model inference, thereby stratifying key information from input signals. We implement the 2-view version of the proposed architecture for three clinically relevant regression and classification tasks related to cardiovascular health, involving electrocardiogram (ECG) or photoplethysmogram (PPG) signals. Experimental results demonstrate that the complementary masks self-generated by our architecture outperform well-established post-hoc methods (LIME and SHAP) in explainability, both qualitatively and quantitatively. Furthermore, the 2-view model offers task-level performance comparable to or better than state-of-the-art methods, demonstrating its broad applicability across various cardiovascular-related tasks. Overall, the proposed method offers new directions for interpretable machine learning and data-driven analysis of cardiovascular signals, envisioning self-explaining models for clinical applications.
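(For readers skimming the abstract, the following is a minimal, hypothetical sketch of the 2-view idea it describes: a mask generator produces a soft, task-relevant mask over the input signal at inference time, one view encodes the masked-in signal, the other encodes the complementary signal, and both are fused for the task head. All module names and hyperparameters below are illustrative assumptions, not the paper's actual implementation; PyTorch is assumed.)

```python
import torch
import torch.nn as nn


class MaskGenerator(nn.Module):
    """Produces a soft per-sample mask in [0, 1] over a 1-D signal (hypothetical design)."""

    def __init__(self, channels: int = 1, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, channels, kernel_size=9, padding=4),
            nn.Sigmoid(),  # per-sample feature importance in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class TwoViewSelfExplainingModel(nn.Module):
    """2-view model: one encoder sees x * m, the other sees the complement x * (1 - m)."""

    def __init__(self, channels: int = 1, embed: int = 32, out_dim: int = 1):
        super().__init__()
        self.masker = MaskGenerator(channels)
        self.enc_a = nn.Sequential(
            nn.Conv1d(channels, embed, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.enc_b = nn.Sequential(
            nn.Conv1d(channels, embed, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(2 * embed, out_dim)  # regression or classification logits

    def forward(self, x: torch.Tensor):
        m = self.masker(x)                            # task-relevant mask, generated at inference
        z_a = self.enc_a(x * m).flatten(1)            # view 1: masked-in portion of the signal
        z_b = self.enc_b(x * (1.0 - m)).flatten(1)    # view 2: complementary portion
        return self.head(torch.cat([z_a, z_b], dim=1)), m


# Usage: a batch of 4 single-lead signals of length 1024 (e.g., ECG/PPG windows).
model = TwoViewSelfExplainingModel()
y_hat, mask = model(torch.randn(4, 1, 1024))
print(y_hat.shape, mask.shape)  # torch.Size([4, 1]) torch.Size([4, 1, 1024])
```

Because the mask is produced by the model itself during the forward pass, it can be read out directly as an explanation, in contrast to post-hoc attribution methods such as LIME or SHAP that probe a trained model from the outside.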
Primary Area: interpretability and explainable AI
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 12242