Keywords: Kidney Ablation, Segmentation, Interpretability, Uncertainty, Deep Learning, CT images
TL;DR: We propose XBoundNet++, a boundary-aware, interpretable, and uncertainty-aware deep learning framework for kidney ablation zone segmentation in ambiguously labelled CT images, enabling trustworthy AI-assisted clinical decision-making.
Abstract: Kidney ablation therapy is a minimally invasive procedure used to treat renal tumours. Evaluating treatment success and planning follow-up care rely on accurate kidney ablation zone (KAZ) segmentation in post-operative CT images. However, manual segmentation is time-consuming and prone to inter-observer variability, and automated segmentation is challenging because ground-truth labels provide only a partial estimate of the area of interest. This challenge is prevalent in many interventional and surgical contexts, yet understudied in the medical imaging domain. Segmenting the area of interest therefore requires careful attention to the specific clinical needs of the resulting deep learning framework, including model interpretability and uncertainty estimation for further clinical review. We introduce XBoundNet++, a deep learning framework that provides (1) precise segmentation of the boundary, (2) detailed attention maps for layer-wise model interpretability, and (3) model uncertainty estimation based on Bayesian Monte Carlo dropout and model ensembles. The model was trained and evaluated with nested 5-fold cross-validation on a local dataset of 76 patients (912 2D radial CT slices) with manually annotated KAZs, collected at London Health Sciences Centre. Quantitative analysis showed that XBoundNet++ achieved promising segmentation results: 88% precision, 83% recall, 84% DSC, 74% Jaccard, a 6.89-pixel mean absolute distance (MAD), a -0.60-pixel mean signed distance (MSD), and a 19.86-pixel Hausdorff distance (HD). Furthermore, per-layer heatmaps, probability and uncertainty maps, and uncertainty estimates at several thresholds indicate the model's trustworthiness, confidence, and justification for its predictions.
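The uncertainty-estimation idea in the abstract (Monte Carlo dropout over repeated stochastic forward passes) can be sketched in a few lines. This is a minimal, hypothetical illustration, not the authors' implementation: `stochastic_forward` is a stand-in for one dropout-enabled pass of a segmentation network, and the per-pixel predictive entropy of the averaged probabilities serves as the uncertainty map.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(image, drop_rate=0.2):
    """Hypothetical stand-in for one MC-dropout forward pass of a
    segmentation network: returns a per-pixel foreground probability map.
    Here, fixed "logits" are perturbed by inverted dropout to mimic the
    stochasticity of a real network with dropout kept on at test time."""
    logits = np.where(image > 0.5, 2.0, -2.0)      # fake network logits
    keep = rng.random(image.shape) > drop_rate     # dropout mask
    noisy = logits * keep / (1.0 - drop_rate)      # inverted dropout scaling
    return 1.0 / (1.0 + np.exp(-noisy))            # sigmoid -> probabilities

def mc_dropout_uncertainty(image, T=20):
    """Run T stochastic passes; return the mean probability map and the
    binary predictive entropy as a per-pixel uncertainty map."""
    probs = np.stack([stochastic_forward(image) for _ in range(T)])
    mean_p = probs.mean(axis=0)
    eps = 1e-7
    entropy = -(mean_p * np.log(mean_p + eps)
                + (1.0 - mean_p) * np.log(1.0 - mean_p + eps))
    return mean_p, entropy

image = rng.random((64, 64))              # toy stand-in for a CT slice
mean_p, unc = mc_dropout_uncertainty(image)
seg = mean_p > 0.5                        # thresholded segmentation
```

In practice the mean map would be thresholded at several operating points, as the abstract describes, and pixels with high entropy flagged for clinical review; a model ensemble adds the same averaging across independently trained networks.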
Camera Ready Submission: zip
Submission Number: 13