Semi-supervised Multiple Instance Learning using Variational Auto-Encoders

09 Dec 2021 (modified: 16 May 2023) · Submitted to MIDL 2022
Keywords: Multiple-Instance Learning, Variational Autoencoders, Deep Generative Models
TL;DR: We extend the MIL classification problem to learning a joint distribution in the semi-supervised setting. We propose a latent-variable generative model for MIL with a shared parameterization between the classifier and the unsupervised component.
Abstract: We consider the multiple-instance learning (MIL) paradigm, a special case of supervised learning in which training instances are grouped into bags. In MIL, the hidden instance labels need not match the label of the bag that contains them. Hybrid modelling, on the other hand, is known to offer advantages through the seamless combination of discriminative and generative components. In this paper, we investigate whether we can get the best of both worlds (MIL and hybrid modelling), particularly in a semi-supervised learning (SSL) setting. We first integrate a variational autoencoder (VAE), a powerful deep generative model, with an attention-based MIL classifier, then evaluate the performance of the resulting model in SSL. We assess the proposed approach on an established benchmark as well as a real-world medical dataset.
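
The abstract describes coupling a VAE with an attention-based MIL classifier through a shared encoder, training on both labelled and unlabelled bags. Below is a minimal PyTorch sketch of that idea, assuming gated attention pooling (in the style of Ilse et al., 2018) and a Gaussian VAE; all module names, dimensions, and the loss weighting are illustrative assumptions, not the authors' released implementation.

```python
# Sketch (assumptions, not the paper's code): a VAE whose instance encoder is
# shared with a gated-attention MIL classifier, trained with ELBO on all bags
# plus bag-level cross-entropy on labelled bags.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VaeMil(nn.Module):
    def __init__(self, in_dim=784, hid=128, z_dim=32, att_dim=64, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())  # shared encoder
        self.mu = nn.Linear(hid, z_dim)
        self.logvar = nn.Linear(hid, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hid), nn.ReLU(),
                                 nn.Linear(hid, in_dim))
        # gated attention pooling over instance embeddings
        self.att_v = nn.Sequential(nn.Linear(hid, att_dim), nn.Tanh())
        self.att_u = nn.Sequential(nn.Linear(hid, att_dim), nn.Sigmoid())
        self.att_w = nn.Linear(att_dim, 1)
        self.cls = nn.Linear(hid, n_classes)

    def forward(self, bag):                       # bag: (n_instances, in_dim)
        h = self.enc(bag)                         # instance embeddings shared by both parts
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        recon = self.dec(z)
        a = torch.softmax(self.att_w(self.att_v(h) * self.att_u(h)), dim=0)
        bag_emb = (a * h).sum(dim=0)              # attention-weighted bag embedding
        return self.cls(bag_emb), recon, mu, logvar

def loss(model, bag, label=None):
    logits, recon, mu, logvar = model(bag)
    elbo = F.mse_loss(recon, bag, reduction='sum') \
         - 0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    if label is None:                             # unlabelled bag: generative term only
        return elbo
    return elbo + F.cross_entropy(logits.unsqueeze(0), label)   # labelled bag
```

In this sketch, unlabelled bags contribute only the reconstruction and KL terms, while labelled bags additionally supervise the attention-pooled classifier; the relative weighting of the two terms is left at 1 purely for brevity.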
Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
Paper Type: methodological development
Primary Subject Area: Unsupervised Learning and Representation Learning
Secondary Subject Area: Application: Other
Confidentiality And Author Instructions: I read the call for papers and author instructions. I acknowledge that exceeding the page limit and/or altering the latex template can result in desk rejection.