Keywords: disentanglement, representation learning, identifiability
TL;DR: We extend the DCI framework for evaluating disentangled representations and connect it to identifiability.
Abstract: In representation learning, a common approach is to seek representations that disentangle the underlying factors of variation. Eastwood & Williams (2018) proposed a framework and three metrics for quantifying the quality of such disentangled representations: disentanglement (D), completeness (C) and informativeness (I). We provide several extensions of this DCI framework by considering the functional capacity required to use a representation. In particular, we establish links to identifiability, point out how D and C can be computed for black-box predictors, and introduce two new measures of representation quality: explicitness (E), derived from a representation's loss-capacity curve, and size (S) relative to the ground truth. We illustrate the relevance of our extensions on the MPI3D-Real dataset.
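To make the D and C metrics mentioned in the abstract concrete, the sketch below computes them from a code-by-factor importance matrix in the spirit of Eastwood & Williams (2018): disentanglement is one minus the entropy of each code's importance distribution over factors, and completeness is one minus the entropy of each factor's importance distribution over codes. This is a minimal illustrative sketch with assumed conventions (importance weighting of codes, unweighted mean over factors), not the authors' reference implementation.

```python
import numpy as np

def _norm_entropy(p, base):
    """Entropy of a probability vector, normalized to [0, 1] via log base `base`."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(base))

def dci_scores(R):
    """Disentanglement (D) and completeness (C) from an importance matrix
    R of shape (num_codes, num_factors), where R[i, j] >= 0 is the
    importance of code i for predicting ground-truth factor j.
    Hypothetical sketch following the entropy-based DCI definitions."""
    num_codes, num_factors = R.shape
    # Per-code disentanglement: 1 - entropy of the code's row distribution.
    P_rows = R / R.sum(axis=1, keepdims=True)
    D_i = 1.0 - np.array([_norm_entropy(p, num_factors) for p in P_rows])
    # Per-factor completeness: 1 - entropy of the factor's column distribution.
    P_cols = R / R.sum(axis=0, keepdims=True)
    C_j = 1.0 - np.array([_norm_entropy(p, num_codes) for p in P_cols.T])
    # Weight each code by its share of total importance (assumed convention).
    rho = R.sum(axis=1) / R.sum()
    return float((rho * D_i).sum()), float(C_j.mean())
```

For a one-to-one code-to-factor alignment (identity importance matrix) both scores are 1; for a uniform matrix, where every code is equally important for every factor, both scores are 0.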