Exploring Interpretability in Deep Learning Prediction of Successful Ablation Therapy for Atrial Fibrillation

10 Dec 2021 (modified: 16 May 2023). Submitted to MIDL 2022.
Keywords: Deep learning, Atrial fibrillation, Cardiac modelling, Interpretability, Medical imaging
Abstract: Radiofrequency catheter ablation (RFCA) is the first-line treatment for atrial fibrillation (AF), the most common type of cardiac arrhythmia globally. However, the procedure currently has a low success rate for persistent AF, with a recurrence rate of ∼50% post-ablation. Deep learning (DL) has therefore increasingly been applied to improve and optimise RFCA treatment for AF. However, for a clinician to trust the prediction of a DL model, the model's decision process needs to be interpretable and biomedically relevant. This study explores DL interpretability in predicting the success of RFCA strategies simulated using MRI-derived 2D left atrial (LA) tissue models. The developed DL model had an AUC (area under the receiver operating characteristic curve) of 0.78 ± 0.04 for predicting the success of the pulmonary vein isolation (PVI) strategy, 0.923 ± 0.016 for a fibrosis-based strategy (FIBRO) and 0.77 ± 0.02 for a rotor-based strategy (ROTOR). Three feature attribution (FA) map methods were used to investigate interpretability: GradCAM, Occlusions and LIME. GradCAM yielded the highest percentage of RFCA ablation lesions (known from the 2D LA simulations but unseen by the DL model) falling within informative regions of the FA maps (62% for FIBRO and 71% for ROTOR). Most ablation lesions coincided with informative regions of the FA maps for ROTOR and FIBRO, suggesting that the DL model leveraged structural features of the MR images to identify pro-arrhythmogenic regions when making its predictions. In the future, such techniques could help predict the success of RFCA strategies directly from patient imaging data.
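
To make the abstract's evaluation concrete, below is a minimal sketch of the GradCAM-plus-overlap analysis it describes, using Captum's LayerGradCam on a placeholder CNN. The TinyCNN architecture, the random input image and lesion mask, and the 80th-percentile definition of "informative region" are illustrative assumptions, not the authors' exact setup; only the general pipeline (attribute, upsample to input resolution, measure the fraction of lesion pixels inside informative FA regions) follows the abstract.

    # Sketch: Grad-CAM feature attribution for a binary RFCA-success
    # classifier, plus the lesion-overlap metric from the abstract.
    import numpy as np
    import torch
    import torch.nn as nn
    from captum.attr import LayerGradCam, LayerAttribution

    class TinyCNN(nn.Module):  # placeholder 2D LA-image classifier (assumption)
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            )
            self.head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))
        def forward(self, x):
            return self.head(self.features(x))

    model = TinyCNN().eval()
    image = torch.rand(1, 1, 128, 128)  # stand-in for an MRI-derived 2D LA map

    # Grad-CAM on the last conv layer, upsampled to input resolution.
    gradcam = LayerGradCam(model, model.features[3])
    attr = gradcam.attribute(image, target=1)  # target=1: "successful ablation"
    fa_map = LayerAttribution.interpolate(
        attr, image.shape[-2:]).squeeze().detach().numpy()

    # Overlap metric: fraction of ablation-lesion pixels (from the 2D LA
    # simulations, unseen by the model) falling inside the most informative
    # FA pixels; the top-20% threshold here is an illustrative choice.
    lesion_mask = np.zeros((128, 128), dtype=bool)  # stand-in lesion mask
    lesion_mask[60:70, 60:70] = True
    informative = fa_map >= np.quantile(fa_map, 0.80)
    overlap_pct = 100.0 * (lesion_mask & informative).sum() / lesion_mask.sum()
    print(f"{overlap_pct:.1f}% of lesion pixels lie in informative FA regions")

The same overlap metric would apply to the other two FA methods by swapping the attribution step, e.g. captum.attr.Occlusion for occlusion maps or the lime package (lime.lime_image) for LIME, while keeping the thresholding and lesion-mask comparison unchanged.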
Paper Type: validation/application paper
Primary Subject Area: Interpretability and Explainable AI
Secondary Subject Area: Application: Other