Unsupervised Learning of Categorical Structure

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: geometry, interpretability, abstraction, factorization, compositionality, disentanglement, hierarchy
TL;DR: We study the problem of inferring the logical variables, and their structure, encoded in a continuous representation.
Abstract: Humans are known to reason using logic and abstract categories, and yet most state-of-the-art neural models use continuous distributed representations. These representations offer impressive gradient-based learning capabilities, but it is often difficult to know what symbolic algorithm, if any, the network might implicitly be implementing. We find that there are representational geometries that naturally suggest a symbolic structure, one that can be expressed in terms of binary components. We show that this structure can be recovered by fitting the geometry of a binary embedding to the representational geometry of the original objects. After establishing general facts and providing some intuitions, we present two algorithms that handle low-rank and full-rank data, respectively. We assess their reliability on simulated data, and then use them to interpret neural word embeddings, in which we expect a compositional structure.
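To make the core idea concrete, the following is a minimal sketch, not the paper's two algorithms: it greedily fits centered rank-one binary terms to the Gram matrix of the data, recovering candidate binary components. The function name `fit_binary_components`, the exhaustive search over labelings, and the deflation strategy are all illustrative assumptions for a toy setting.

```python
# Minimal sketch: recover binary "concept" labelings whose geometry
# best matches the representational geometry (Gram matrix) of X.
# Illustrative only; exhaustive search is feasible for small n.

import itertools
import numpy as np

def fit_binary_components(X, n_components=2):
    """Greedily find binary labelings b in {0,1}^n whose centered
    outer products best explain the Gram matrix of X (n x d)."""
    n = X.shape[0]
    G = X @ X.T                                         # Gram matrix
    G = G - G.mean(0) - G.mean(1)[:, None] + G.mean()   # double-center
    components = []
    for _ in range(n_components):
        best_b, best_score = None, -np.inf
        # Exhaustive search over binary labelings (2^n candidates).
        for bits in itertools.product([0, 1], repeat=n):
            b = np.array(bits, float)
            v = b - b.mean()            # centered binary direction
            norm = v @ v
            if norm == 0:
                continue                # skip constant labelings
            score = (v @ G @ v) / norm  # variance explained along v
            if score > best_score:
                best_score, best_b = score, b.astype(int)
        v = best_b - best_b.mean()
        # Deflate: subtract the least-squares rank-one fit c * v v^T,
        # with c = (v^T G v) / (v^T v)^2.
        G = G - np.outer(v, v) * (v @ G @ v) / (v @ v) ** 2
        components.append(best_b)
    return components

# Toy data: 8 objects built from 3 latent binary factors plus noise.
rng = np.random.default_rng(0)
factors = np.array(list(itertools.product([0, 1], repeat=3)), float)
directions = rng.normal(size=(3, 16))   # one direction per factor
X = factors @ directions + 0.05 * rng.normal(size=(8, 16))

for b in fit_binary_components(X, n_components=3):
    print(b)
```

On clean toy data of this kind, the recovered labelings match the generating factors up to complementation (a labeling and its complement define the same centered direction); the paper's actual algorithms are designed to scale beyond this brute-force regime.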
Primary Area: applications to neuroscience & cognitive science
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 11368