Measuring Orthogonality in Representations of Generative Models

Published: 17 Oct 2024, Last Modified: 17 Oct 2024
Accepted by TMLR (CC BY 4.0)
Abstract: In unsupervised representation learning, models aim to distill essential features from high-dimensional data into lower-dimensional learned representations, guided by inductive biases. Understanding the characteristics that make a good representation remains a topic of ongoing research. Disentanglement of independent generative processes has long been credited with producing high-quality representations. However, focusing solely on representations that satisfy the stringent requirements of most disentanglement metrics may overlook many high-quality representations that are well suited for various downstream tasks. These metrics often demand that generative factors be encoded in distinct, single dimensions aligned with the canonical basis of the representation space. Motivated by these observations, we propose two novel metrics: Importance-Weighted Orthogonality (IWO) and Importance-Weighted Rank (IWR). These metrics evaluate the mutual orthogonality and rank of generative factor subspaces. In extensive experiments on common downstream tasks, across several benchmark datasets and models, IWO and IWR consistently show stronger correlations with downstream task performance than traditional disentanglement metrics. Our findings suggest that representation quality is more closely related to the orthogonality of independent generative processes than to their disentanglement, offering a new direction for evaluating and improving unsupervised learning models.
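For intuition, below is a minimal sketch of the kind of quantity IWO aggregates: pairwise orthogonality between generative-factor subspaces, weighted by factor importance. This is an illustration under stated assumptions, not the paper's exact definition; the function names, the principal-angle-based orthogonality score, and the importance weights are all hypothetical stand-ins (see the linked repository for the authors' implementation).

```python
import numpy as np

def subspace_orthogonality(B_i, B_j):
    # Columns of B_i, B_j are assumed orthonormal; the squared Frobenius
    # norm of B_i^T B_j equals the sum of squared cosines of the principal
    # angles between the two subspaces.
    overlap = np.linalg.norm(B_i.T @ B_j, ord="fro") ** 2
    # 1.0 for mutually orthogonal subspaces, 0.0 when one contains the other.
    return 1.0 - overlap / min(B_i.shape[1], B_j.shape[1])

def importance_weighted_orthogonality(bases, importances):
    # bases: list of (d, r_k) arrays with orthonormal columns, one per
    # generative factor; importances: per-factor weights (a hypothetical
    # stand-in for learned importance scores).
    w = np.asarray(importances, dtype=float)
    w = w / w.sum()
    score, norm = 0.0, 0.0
    for i in range(len(bases)):
        for j in range(i + 1, len(bases)):
            pw = w[i] * w[j]
            score += pw * subspace_orthogonality(bases[i], bases[j])
            norm += pw
    return score / norm

# Toy usage: three random 2-D factor subspaces in a 10-D representation.
rng = np.random.default_rng(0)
bases = [np.linalg.qr(rng.standard_normal((10, 2)))[0] for _ in range(3)]
print(importance_weighted_orthogonality(bases, [0.5, 0.3, 0.2]))
```

A companion quantity in the spirit of IWR would presumably weight each factor subspace's rank by the same importance scores; the precise definitions of both metrics are given in the paper.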
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Camera Ready Version
Code: https://github.com/cyrusgeyer/iwo
Assigned Action Editor: ~David_Fouhey2
Submission Number: 2883