A Preliminary Study of Disentanglement With Insights on the Inadequacy of Metrics

Anonymous

04 Sept 2019 (modified: 05 May 2023) · NeurIPS 2019 Workshop DC S1 Blind Submission
Keywords: Disentanglement, Representation Learning, Total Correlation, Factorization
TL;DR: Inadequacy of Disentanglement Metrics
Abstract: Disentangled encoding is an important step towards better representation learning. However, despite numerous efforts, there is still no clear winner that captures the independent features of the data in an unsupervised fashion. In this work, we empirically evaluate the performance of six unsupervised disentanglement approaches on the mpi3d toy dataset curated and released for the NeurIPS 2019 Disentanglement Challenge. The methods investigated are Beta-VAE, Factor-VAE, DIP-I-VAE, DIP-II-VAE, Info-VAE, and Beta-TCVAE. The capacities of all models were progressively increased throughout training, and the hyper-parameters were kept fixed across experiments. The methods were evaluated on five disentanglement metrics, namely DCI, Factor-VAE, IRS, MIG, and SAP-Score. Within the limitations of this study, the Beta-TCVAE approach was found to outperform its alternatives with respect to the normalized sum of metrics. However, a qualitative study of the encoded latents reveals that there is no consistent correlation between the reported metrics and the disentanglement potential of the models.
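
The abstract does not spell out how the five metric scores are combined into the "normalized sum of metrics". Below is a minimal sketch of one plausible reading (per-metric min-max normalization across models, followed by a sum per model); the function name and the normalization choice are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def normalized_metric_sum(scores: np.ndarray) -> np.ndarray:
    """Aggregate per-model disentanglement metric scores into one value.

    scores: array of shape (n_models, n_metrics), one column per metric
            (e.g. DCI, Factor-VAE, IRS, MIG, SAP-Score).
    Returns an array of shape (n_models,) with one aggregate score per model.
    """
    # Min-max normalize each metric column across models so that metrics on
    # different scales contribute comparably, then sum the columns per model.
    mins = scores.min(axis=0)
    spans = scores.max(axis=0) - mins
    spans = np.where(spans == 0, 1.0, spans)  # guard against constant columns
    return ((scores - mins) / spans).sum(axis=1)
```

Models would then be ranked by the returned aggregate score, with the caveat (raised in the abstract itself) that a higher aggregate does not necessarily imply qualitatively better disentanglement.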