- Keywords: Multimodal Learning, Generative Learning, VAE
- TL;DR: We apply a method for multimodal generative learning to the MIMIC-CXR database and highlight its strengths and weaknesses with respect to medical data.
- Abstract: Machine learning has become increasingly popular in the medical domain in recent years. While supervised machine learning has already been applied successfully, the vast amount of unlabelled data offers new opportunities for un- and self-supervised learning methods. Especially given the multimodal nature of most clinical data, labelling multiple data types quickly becomes infeasible in the medical domain. However, to the best of our knowledge, multimodal unsupervised methods have been tested extensively only on toy datasets and have never been applied to real-world medical data for direct applications such as disease classification and image generation. In this article, we demonstrate that self-supervised methods provide promising results on medical data, while highlighting that the task is extremely challenging and that there is room for substantial improvement.
- Paper Type: validation/application paper
- Primary Subject Area: Unsupervised Learning and Representation Learning
- Secondary Subject Area: Application: Other
- Paper Status: original work, not submitted yet
- Source Code Url: https://github.com/Jimmy2027/MoPoE-MIMIC
- Data Set Url: https://physionet.org/content/mimic-cxr/2.0.0/
- Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
- Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.