Utilising Explainability Techniques in Medical Imaging

MICCAI 2024 MEC Submission 13 Authors

18 Aug 2024 (modified: 20 Aug 2024) · MICCAI 2024 MEC Submission · CC BY 4.0
Keywords: explainable ai, medical imaging
Abstract: This tutorial implements explainable AI (XAI) techniques that are crucial for understanding and interpreting the decisions of complex deep learning models in medical imaging, where model transparency is critical for building trust with healthcare professionals and ensuring the safe and effective use of AI-driven diagnostics. The tutorial guides you through two fundamental XAI techniques: Gradient-weighted Class Activation Mapping (Grad-CAM) and SHapley Additive exPlanations (SHAP). These methods provide visual and feature-level insights into model predictions, helping to demystify the "black box" nature of AI models.
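To make the two techniques concrete, here is a minimal pure-Python sketch of the core computations, independent of the notebook's actual models and data: Grad-CAM weights a convolutional layer's activation maps by the spatially averaged gradients of the class score and applies a ReLU, while SHAP attributes a prediction to input features via Shapley values (computed exactly here by enumerating feature subsets; the SHAP library approximates this for real models). The toy activation maps, linear model, and baseline below are illustrative assumptions, not part of the tutorial.

```python
import math
from itertools import combinations

def grad_cam(activations, gradients):
    """Grad-CAM heatmap: ReLU of the activation maps weighted by the
    global-average-pooled gradients of the class score.
    activations, gradients: K feature maps, each H x W, as nested lists."""
    K, H, W = len(activations), len(activations[0]), len(activations[0][0])
    # alpha_k: average the gradient over each channel's spatial positions
    alphas = [sum(map(sum, g)) / (H * W) for g in gradients]
    return [[max(0.0, sum(alphas[k] * activations[k][i][j] for k in range(K)))
             for j in range(W)] for i in range(H)]

def shapley_values(model, x, baseline):
    """Exact Shapley values: each feature's weighted marginal contribution
    over all subsets; 'absent' features are replaced by the baseline."""
    n = len(x)
    def v(subset):
        return model([x[i] if i in subset else baseline[i] for i in range(n)])
    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                total += w * (v(set(s) | {i}) - v(set(s)))
        phi.append(total)
    return phi

# Toy checks (hypothetical inputs): a single 2x2 activation map with uniform
# positive gradients passes through unchanged, and for a linear model the
# Shapley values reduce to coefficient * (feature - baseline).
heatmap = grad_cam([[[1.0, 0.0], [0.0, 1.0]]], [[[1.0, 1.0], [1.0, 1.0]]])
model = lambda z: 2 * z[0] + 3 * z[1]
phi = shapley_values(model, [1.0, 1.0], [0.0, 0.0])
print(heatmap)  # [[1.0, 0.0], [0.0, 1.0]]
print(phi)      # [2.0, 3.0]
```

Note the "efficiency" property that makes Shapley values attractive for explanation: the attributions sum exactly to the difference between the model's output on the input and on the baseline.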
Website: https://colab.research.google.com/drive/1aSOZiqr7ZVq51XxazvK8ikuKy9UmJHa2?usp=sharing
Submission Number: 13