From Causal to Concept-Based Representation Learning

Published: 10 Oct 2024, Last Modified: 31 Oct 2024 · CaLM @ NeurIPS 2024 (Oral) · CC BY 4.0
Keywords: concept learning, causal representation learning, interpretable representation learning
TL;DR: We formally study how to extract concepts from data, drawing on ideas from the causal representation learning and interpretability literatures.
Abstract: There are two broad approaches to building intelligent machine learning systems. One is to build inherently interpretable models, as pursued by the growing field of causal representation learning. The other is to build highly performant foundation models and then invest effort in understanding how they work. In this work, we relate these two approaches and study how to learn human-interpretable concepts from data. Weaving together ideas from both fields, we formally define a notion of concepts and prove that they can be identifiably recovered from diverse data. Experiments on synthetic data, CLIP models, and large language models show the utility of our unified approach.
Submission Number: 7