Keywords: Causal Representation Learning, Causal Abstraction, Interpretability, Causality
Abstract: Why does a phenomenon occur? Addressing this question is central to most scientific inquiries and often relies on simulations of scientific models. As models become more intricate, deciphering the causes behind phenomena in high-dimensional spaces of interconnected variables becomes increasingly challenging. Causal Representation Learning (CRL) offers a promising avenue to uncover interpretable causal patterns within these simulations through an interventional lens. However, developing general CRL frameworks suitable for practical applications remains an open challenge. We introduce _Targeted Causal Reduction_ (TCR), a method for condensing complex intervenable models into a concise set of causal factors that explain a specific target phenomenon. We propose an information-theoretic objective to learn TCR from interventional data from simulations, establish identifiability for continuous variables under shift interventions, and present a practical algorithm for learning TCRs. Its ability to generate interpretable high-level explanations from complex models is demonstrated on toy and mechanical systems, illustrating its potential to assist scientists in the study of complex phenomena in a broad range of disciplines.
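To make the idea of reducing a complex intervenable model to a single high-level cause for a target more concrete, the following is a minimal sketch, not the authors' implementation: it uses ordinary least squares as a crude surrogate for the information-theoretic TCR objective, a synthetic linear low-level system with shift interventions, and hypothetical names (`n_low`, `shifts`, `tau`) that do not come from the paper or its code repository.

```python
import numpy as np

rng = np.random.default_rng(0)
n_low, n_samples = 20, 5000

# Ground-truth low-level mechanism: the scalar target y is driven by only a
# few components of the low-level state x; shift interventions add a vector
# to x before the mechanism is evaluated.
w_true = np.zeros(n_low)
w_true[:3] = [1.5, -2.0, 0.7]

shifts = rng.normal(size=(n_samples, n_low))        # shift interventions
x = rng.normal(size=(n_samples, n_low)) + shifts    # intervened low-level state
y = x @ w_true + 0.1 * rng.normal(size=n_samples)   # target phenomenon

# Crude surrogate for the TCR objective: find a linear reduction tau so that
# the single high-level cause z = x @ tau explains the target's response to
# the interventions (here via least squares rather than an information-
# theoretic criterion).
tau, *_ = np.linalg.lstsq(x, y, rcond=None)
z = x @ tau                                          # learned high-level cause

# Under these assumptions the reduction should align with the true driver.
cos_sim = tau @ w_true / (np.linalg.norm(tau) * np.linalg.norm(w_true))
print(f"cosine similarity with ground-truth direction: {cos_sim:.3f}")
```

In this toy setting the learned direction recovers the few low-level components that actually drive the target, which is the kind of concise, interpretable explanation TCR aims to extract from far more complex simulations.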
List Of Authors: Kekić, Armin and Schölkopf, Bernhard and Besserve, Michel
Latex Source Code: zip
Signed License Agreement: pdf
Code Url: https://github.com/akekic/targeted-causal-reduction
Submission Number: 409