Interpreting Latent Spaces of Generative Models for Medical Images using Unsupervised Methods

10 Dec 2021 (modified: 11 Aug 2024) · Submitted to MIDL 2022
Keywords: Generative models, unsupervised learning, interpretability, computed tomography
TL;DR: Discovering interpretable directions that are semantically meaningful in the latent spaces of deep generative models for medical images using unsupervised methods.
Abstract: Generative models such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) play an increasingly important role in medical image analysis. They are used to synthesize, de-noise, super-resolve, and augment medical images. The latent spaces of these models often contain semantically meaningful directions corresponding to human-interpretable image transformations. However, until now, their exploration for medical images has been limited by the need for supervised data. Recently, several methods for unsupervised discovery of interpretable directions in GAN latent spaces have shown interesting results on natural images. This work explores the potential of applying these techniques to medical images by training a deep convolutional GAN and a VAE on thoracic CT scans and using an unsupervised method to discover interpretable directions in the resulting latent spaces. We find several directions corresponding to non-trivial image transformations, such as rotation or changes in breast size, as well as directions showing that the generative models capture 3D structure despite being trained only on two-dimensional data. The results show that unsupervised methods for discovering interpretable directions in generative model latent spaces generalize to VAEs and can be applied to medical images. This could open up a wide range of future work using these methods in medical image analysis.
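The abstract does not spell out the direction-discovery procedure, so the following is only a minimal, hypothetical PyTorch sketch of one method in this family (in the spirit of Voynov and Babenko's unsupervised approach for GANs): a matrix of candidate latent directions and a reconstructor network are trained jointly, with the generator frozen, so that the reconstructor can recover which direction a latent code was shifted along and by how much. The `ToyGenerator`, the network sizes, and the training schedule below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of unsupervised latent-direction discovery (Voynov & Babenko style):
# learn K candidate directions plus a reconstructor that must recover which direction
# (and by what magnitude) a latent code was shifted. All sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM, NUM_DIRS = 128, 32  # assumed sizes, not taken from the paper

class ToyGenerator(nn.Module):
    """Stand-in for a pretrained GAN/VAE decoder: latent vector -> 1x64x64 image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (256, 8, 8)),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

class Reconstructor(nn.Module):
    """Predicts which direction index and shift magnitude separate an image pair."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.dir_head = nn.Linear(128, NUM_DIRS)  # classify the direction index
        self.eps_head = nn.Linear(128, 1)         # regress the shift magnitude

    def forward(self, img_a, img_b):
        h = self.features(torch.cat([img_a, img_b], dim=1))
        return self.dir_head(h), self.eps_head(h).squeeze(1)

G = ToyGenerator().eval()                   # a pretrained generator would go here
for p in G.parameters():
    p.requires_grad_(False)                 # freeze generator weights, keep it differentiable

directions = nn.Parameter(torch.randn(NUM_DIRS, LATENT_DIM))
R = Reconstructor()
opt = torch.optim.Adam([directions, *R.parameters()], lr=1e-4)

for step in range(100):                     # shortened loop for illustration
    z = torch.randn(16, LATENT_DIM)
    k = torch.randint(0, NUM_DIRS, (16,))   # which direction to apply per sample
    eps = torch.rand(16) * 6 - 3            # shift magnitude in [-3, 3]
    unit = F.normalize(directions, dim=1)[k]
    z_shift = z + eps.unsqueeze(1) * unit
    img_a, img_b = G(z), G(z_shift)         # gradients flow through G into `directions`
    logits, eps_hat = R(img_a, img_b)
    loss = F.cross_entropy(logits, k) + F.l1_loss(eps_hat, eps)
    opt.zero_grad(); loss.backward(); opt.step()
```

The intuition behind this family of methods is that a direction is only identifiable from image pairs if it produces a distinct, consistent visual change, so directions the reconstructor learns to recognize tend to align with human-interpretable transformations; these are then inspected manually, as done here for the CT-trained GAN and VAE.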
Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
Paper Type: validation/application paper
Primary Subject Area: Unsupervised Learning and Representation Learning
Secondary Subject Area: Interpretability and Explainable AI
Confidentiality And Author Instructions: I read the call for papers and author instructions. I acknowledge that exceeding the page limit and/or altering the LaTeX template can result in desk rejection.
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/interpreting-latent-spaces-of-generative/code)