Lost in Latent Space: Examining failures of disentangled models at combinatorial generalisation

Published: 31 Oct 2022, Last Modified: 03 Jan 2023. NeurIPS 2022 Accept.
Keywords: Combinatorial Generalisation, Disentanglement, Generative Models, Representation Learning
TL;DR: Exploring the reasons for the successes and failures of disentangled models at combinatorial generalisation
Abstract: Recent research has shown that generative models with highly disentangled representations fail to generalise to unseen combinations of generative factor values. These findings contradict earlier research which showed improved performance in out-of-training-distribution settings when compared to entangled representations. Additionally, it is not clear whether the reported failures are due to (a) encoders failing to map novel combinations to the proper regions of the latent space, or (b) novel combinations being mapped correctly but the decoder being unable to render the correct output for the unseen combinations. We investigate these alternatives by testing several models on a range of datasets and training settings. We find that (i) when models fail, their encoders also fail to map unseen combinations to correct regions of the latent space, and (ii) when models succeed, it is either because the test conditions do not exclude enough examples, or because the excluded cases involve combinations of an object's properties with its shape. We argue that to generalise properly, models not only need to capture factors of variation, but also understand how to invert the process that causes the visual stimulus.
Supplementary Material: pdf