How to deal with missing data in supervised deep learning?

Published: 06 Jul 2020, Last Modified: 05 May 2023, ICML Artemiss 2020
TL;DR: Jointly training a deep latent variable model (DLVM) with the discriminative model leads to performance gains in deep supervised learning with missing values.
Keywords: missing data, supervised learning, deep learning
Abstract: The issue of missing data in supervised learning has been largely overlooked, especially in the deep learning community. We investigate strategies to adapt neural architectures to handle missing values. Here, we focus on regression and classification problems where the features are assumed to be missing at random. Of particular interest are schemes that allow a neural discriminative architecture to be reused as-is. One scheme involves imputing the missing values with learnable constants. We propose a second, novel approach that leverages recent advances in deep generative modelling. More precisely, a deep latent variable model can be learned jointly with the discriminative model, using importance-weighted variational inference in an end-to-end fashion. This hybrid approach, which mimics multiple imputation, also allows the data to be imputed, relying on both the discriminative and the generative model. We also discuss ways of using a pre-trained generative model to train the discriminative one. In domains where powerful deep generative models are available, the hybrid approach leads to large performance gains.
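
To illustrate the first scheme mentioned in the abstract (imputing missing values with learnable constants), here is a minimal sketch, not taken from the paper: it assumes PyTorch, and the class name `LearnableConstantImputer`, the network sizes, and the missingness simulation are all hypothetical. The point is that a per-feature constant is trained jointly with an otherwise unchanged discriminative network.

```python
import torch
import torch.nn as nn

class LearnableConstantImputer(nn.Module):
    """Replace missing entries (NaNs) with per-feature constants that are
    learned jointly with the downstream discriminative network."""

    def __init__(self, n_features: int):
        super().__init__()
        # One trainable imputation constant per feature, initialised at zero.
        self.constants = nn.Parameter(torch.zeros(n_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = torch.isnan(x)                           # True where the value is missing
        x = torch.where(mask, torch.zeros_like(x), x)   # remove NaNs from observed tensor
        return x + mask.float() * self.constants        # insert learned constants at missing positions


# Hypothetical usage: prepend the imputer to any existing classifier.
n_features, n_classes = 20, 3
model = nn.Sequential(
    LearnableConstantImputer(n_features),
    nn.Linear(n_features, 64),
    nn.ReLU(),
    nn.Linear(64, n_classes),
)

x = torch.randn(8, n_features)
x[torch.rand_like(x) < 0.3] = float("nan")   # simulate features missing at random
logits = model(x)                            # the constants receive gradients through the training loss
```

The hybrid scheme described in the abstract would instead draw several importance-weighted samples of the missing entries from a jointly trained deep latent variable model and average the discriminative loss over them, mimicking multiple imputation; a faithful sketch of that pipeline would require the paper's full model and is not attempted here.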