Interpretable Medical Image Classification with Self-Supervised Anatomical Embedding and Prior Knowledge

Published: 11 May 2021, Last Modified: 10 Nov 2024 · MIDL 2021 Poster
Keywords: interpretability, label-efficient learning, self-supervised, landmark detection, clinical knowledge, contrast phase, CT
TL;DR: We use self-supervised embeddings to detect contrast-related anatomical landmarks in CT, and then use clinical prior knowledge to classify the contrast phase.
Abstract: In medical image analysis, it is important that machine learning models focus on the correct anatomical locations, so as to improve their interpretability and robustness. We adopt a recent algorithm, self-supervised anatomical embedding (SAM), to locate points of interest (POIs) on computed tomography (CT) scans. SAM can detect an arbitrary POI with only one labeled sample. We then extract targeted features from the detected POIs and train a simple prediction model guided by clinical prior knowledge. This approach mimics the practice of human radiologists and is therefore interpretable, controllable, and robust. We illustrate the approach on CT contrast phase classification, where it outperforms an existing deep learning method trained on the whole image.
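To make the pipeline concrete, the abstract suggests a two-stage design: SAM localizes contrast-sensitive landmarks, and a simple model built on clinical prior knowledge classifies the phase from features (e.g., intensities) sampled at those landmarks. The sketch below illustrates only the second stage. The landmark names, Hounsfield-unit (HU) thresholds, and rule structure are illustrative assumptions, not values from the paper:

```python
def classify_contrast_phase(aorta_hu: float, portal_vein_hu: float) -> str:
    """Classify CT contrast phase from mean HU values sampled at two
    hypothetical SAM-detected landmarks (aorta and portal vein).

    Thresholds are rough, assumed values for illustration; a real system
    would fit a small classifier to such landmark features instead.
    """
    # Little enhancement anywhere -> pre-contrast scan.
    if aorta_hu < 100 and portal_vein_hu < 100:
        return "non-contrast"
    # Strong aortic enhancement exceeding the portal vein -> arterial phase.
    if aorta_hu >= 250 and aorta_hu > portal_vein_hu:
        return "arterial"
    # Clear portal-vein enhancement -> portal-venous phase.
    if portal_vein_hu >= 120:
        return "portal-venous"
    # Residual moderate enhancement -> delayed phase.
    return "delayed"
```

Because each decision reads off a named anatomical measurement, the prediction is inspectable in the same terms a radiologist would use, which is the interpretability argument the abstract makes.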
Paper Type: both
Primary Subject Area: Application: Radiology
Secondary Subject Area: Interpretability and Explainable AI
Paper Status: original work, not submitted yet
Source Code Url: Source code release is not approved at this point due to company policy.
Data Set Url: The dataset is in-house.
Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/interpretable-medical-image-classification/code) (CatalyzeX)