Disentanglement Challenge: From Regularization to Reconstruction

Anonymous

15 Nov 2019 (modified: 05 May 2023) · NeurIPS 2019 Workshop DC S2 Blind Submission
Keywords: Disentangled representation, unsupervised learning
TL;DR: disentangled representation learning
Abstract: The challenge of learning disentangled representations has recently attracted much attention and has been organized as a competition. Various methods based on variational auto-encoders have been proposed to solve this problem by enforcing independence among the dimensions of the latent representation, modifying the regularization term in the variational lower bound. However, recent work by Locatello et al. (2018) has demonstrated that these methods are heavily influenced by randomness and the choice of hyper-parameters. This work builds upon the same framework as our Stage 1 submission (Li et al., 2019), but with different settings; to keep this manuscript self-contained, it is unavoidably very similar to the Stage 1 report. Specifically, instead of designing a new regularization term, we adopt FactorVAE but improve the reconstruction performance, increase the network capacity, and extend the number of training steps. This strategy turns out to be very effective in achieving disentanglement.
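For context, a brief sketch of the FactorVAE objective (Kim & Mnih, 2018) that the abstract says this work adopts; the standard ELBO is augmented with a total-correlation penalty on the aggregated posterior, weighted by a hyper-parameter gamma (the specific gamma value, network capacity, and training schedule used here are not given in the abstract and are left to the main text):

$$
\mathcal{L}_{\text{FactorVAE}}
= \frac{1}{N}\sum_{i=1}^{N}\Big[\mathbb{E}_{q_\phi(z \mid x^{(i)})}\big[\log p_\theta(x^{(i)} \mid z)\big]
- \mathrm{KL}\big(q_\phi(z \mid x^{(i)}) \,\|\, p(z)\big)\Big]
- \gamma\,\mathrm{KL}\Big(q(z) \,\Big\|\, \prod_{j} q(z_j)\Big),
$$

where $q(z) = \frac{1}{N}\sum_{i} q_\phi(z \mid x^{(i)})$ is the aggregated posterior and the last term (the total correlation) is estimated in practice with an auxiliary discriminator via the density-ratio trick. Improving reconstruction, as emphasized in this work, corresponds to strengthening the first term while retaining the gamma-weighted independence pressure.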