Keywords: Interpretability, Prototype learning, Generative modelling, Alzheimer's Disease Classification, MRI
TL;DR: A diffusion autoencoder coupled with a contrastive loss is used to produce accurate and interpretable MR image classifications of Alzheimer's Disease.
Abstract: In visual object classification, humans often justify their choices by comparing objects to prototypical examples of that class. We may therefore increase the interpretability of deep learning models by imbuing them with a similar style of reasoning. In this work, we apply this principle by classifying Alzheimer’s Disease based on the similarity of images to training examples within the latent space. We use a contrastive loss combined with a diffusion autoencoder backbone to produce a semantically meaningful latent space, such that neighbouring latents have similar image-level features. We achieve a classification accuracy comparable to black-box approaches on a dataset of 2D MRI images, whilst producing human-interpretable model explanations. This work thus contributes to the development of accurate and interpretable deep learning within medical imaging.
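To make the latent-space reasoning in the abstract concrete, below is a minimal PyTorch sketch of the two ingredients it describes: a supervised contrastive loss that pulls same-class latents together, and classification of a query image by its similarity to training examples in the latent space. The encoder itself, the temperature `tau`, and the neighbour count `k` are placeholder assumptions; the paper's actual diffusion autoencoder backbone and hyperparameters are not reproduced here.

```python
import torch
import torch.nn.functional as F

def supcon_loss(z: torch.Tensor, labels: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss: same-class latents are pulled together,
    different-class latents pushed apart (temperature tau is an assumption)."""
    z = F.normalize(z, dim=1)                       # cosine similarity via unit vectors
    sim = z @ z.T / tau                             # pairwise similarity logits
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                           # anchors with at least one positive
    loss = -(log_prob * pos_mask).sum(dim=1)[valid] / pos_counts[valid]
    return loss.mean()

def knn_classify(query_z: torch.Tensor, train_z: torch.Tensor,
                 train_y: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Classify queries by majority vote of the k most similar training latents,
    mirroring the 'similarity to training examples' reasoning in the abstract."""
    q = F.normalize(query_z, dim=1)
    t = F.normalize(train_z, dim=1)
    neighbours = (q @ t.T).topk(k, dim=1).indices    # indices of nearest latents
    return train_y[neighbours].mode(dim=1).values    # majority class label

# Toy usage with random latents standing in for encoder outputs.
torch.manual_seed(0)
z_train = torch.randn(32, 16)                        # 32 training latents, dim 16
y_train = torch.randint(0, 2, (32,))                 # binary AD / control labels
loss = supcon_loss(z_train, y_train)                 # drives latent-space structure
preds = knn_classify(torch.randn(4, 16), z_train, y_train)
```

Because each prediction is a vote over specific training latents, the retrieved neighbours themselves serve as the prototype-style explanation: a clinician can inspect the training images closest to the query in latent space.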
Submission Number: 51