Vector-valued Representation is the Key: A Study on Disentanglement and Compositional Generalization

18 Sept 2023 (modified: 25 Mar 2024), ICLR 2024 Conference Withdrawn Submission
Keywords: Disentanglement, Compositional Generalization
Abstract: Disentanglement and compositional generalization are essential human abilities: they enable rapid knowledge acquisition and generalization to new tasks. These abilities involve recognizing the fundamental concepts underlying observations and generating novel combinations of those concepts. Deep learning models, however, often struggle with both. Numerous studies have proposed methods for disentangled representation learning, and recent research has also begun to address compositional generalization. Despite these advances, the relationship between disentanglement and compositional generalization remains under-explored, and existing literature reports inconsistent findings. In this paper, we analyze several prominent disentangled representation learning methods, examining both their disentanglement and their compositional generalization capabilities. Our study reveals a crucial insight: adopting vector-valued representations (using vectors rather than scalars to represent concepts) significantly improves both disentanglement and compositional generalization. This insight resonates with findings from neuroscience, which suggest that the brain encodes information through the collective activity of neuron populations rather than through individual neurons. Motivated by this observation, we further propose a method that reformulates scalar-valued disentanglement methods ($\beta$-TCVAE and FactorVAE) as vector-valued variants to strengthen both capabilities. We also investigate the impact of the dimensionality of the vector-valued representation, as well as one important question: whether better disentanglement implies stronger compositional generalization. In summary, our study establishes the feasibility of attaining both effective concept recognition and novel concept composition.
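
To make the scalar- versus vector-valued distinction concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it illustrates a VAE encoder whose latent is grouped into one small vector per concept instead of one scalar per concept. The architecture, the names (`VectorValuedEncoder`, `n_concepts`, `dim_per_concept`), and all sizes are illustrative assumptions; the $\beta$-TCVAE and FactorVAE regularizers, which the paper applies on top of such a representation, are not shown.

```python
# Minimal sketch (assumed, not the paper's code) of a vector-valued latent
# layout: each of n_concepts is represented by a dim_per_concept-dimensional
# vector; dim_per_concept=1 recovers the usual scalar-valued layout.
import torch
import torch.nn as nn

class VectorValuedEncoder(nn.Module):
    def __init__(self, in_dim=784, n_concepts=10, dim_per_concept=4):
        super().__init__()
        self.n_concepts = n_concepts
        self.dim_per_concept = dim_per_concept
        latent_dim = n_concepts * dim_per_concept
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Standard VAE reparameterization trick.
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        # Reshape the flat latent into one vector per concept, so any
        # per-concept regularizer can act on groups rather than scalars.
        return z.view(-1, self.n_concepts, self.dim_per_concept)

x = torch.randn(8, 784)
z = VectorValuedEncoder()(x)
print(z.shape)  # torch.Size([8, 10, 4]): 10 concepts, each a 4-d vector
```

Under this (assumed) layout, "novel concept combinations" correspond to recombining the per-concept vectors across samples, which the scalar layout supports only one dimension at a time.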
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1087