Representation Learning as Finding Necessary and Sufficient Causes

Published: 21 Jul 2022, Last Modified: 05 May 2023
SCIS 2022 Poster
Keywords: Causal inference, representation learning, non-spuriousness, disentanglement, probabilities of causation
Abstract: Representation learning constructs low-dimensional representations that summarize essential features of high-dimensional data. This learning problem is often approached by describing various desiderata associated with learned representations, e.g., that they be non-spurious or efficient. It can be challenging, however, to turn these intuitive desiderata into formal criteria that can be measured and enhanced based on observed data. In this paper, we take a causal perspective on representation learning, formalizing non-spuriousness and efficiency (in supervised representation learning) using counterfactual quantities and observable consequences of causal assertions. This yields computable metrics that can be used to assess the degree to which representations satisfy the desiderata of interest, and to learn non-spurious representations from a single observational dataset.
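For context on the keywords: the counterfactual quantities alluded to here are presumably Pearl's probabilities of causation. A standard statement of those definitions, for a binary cause X and outcome Y with potential outcome Y_x under the intervention X = x, is sketched below; the exact formalization used in the paper may differ, so treat this as background rather than the authors' definitions.

% Pearl's probabilities of causation (standard definitions; an assumption
% here, not quoted from this paper). x', y' denote the alternative values.
\begin{align}
  \mathrm{PN}  &= P\big(Y_{x'} = y' \mid X = x,\; Y = y\big)
               && \text{(probability of necessity)} \\
  \mathrm{PS}  &= P\big(Y_{x} = y \mid X = x',\; Y = y'\big)
               && \text{(probability of sufficiency)} \\
  \mathrm{PNS} &= P\big(Y_{x} = y,\; Y_{x'} = y'\big)
               && \text{(probability of necessity and sufficiency)}
\end{align}

Under this reading, a "non-spurious" representation would be one that is a sufficient cause of the label (high PS), while an "efficient" one would contain only necessary features (high PN); again, this mapping is an interpretation of the abstract, not a claim from the paper itself.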