Keywords: causal representation learning, causal inference, information bottleneck, information theory
TL;DR: We propose the Causal Information Bottleneck (CIB), a method for learning representations suitable for causal tasks.
Abstract: To effectively study complex causal systems, it is often useful to construct representations that simplify parts of the system by discarding irrelevant details while preserving key features.
The Information Bottleneck (IB) method is a widely used approach in representation learning that compresses random variables while retaining information about a target variable (the standard IB objective is sketched after the abstract).
Traditional methods like IB are purely statistical and ignore underlying causal structures, making them ill-suited for causal tasks.
We propose the Causal Information Bottleneck (CIB), a causal extension of the IB, which compresses a set of chosen variables while maintaining causal control over a target variable.
This method produces representations that are causally interpretable and that can be used to reason about interventions.
We present experimental results demonstrating that the learned representations accurately capture the intended causal relationships.
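For concreteness, here is a minimal sketch of the objectives referenced above. The standard IB Lagrangian (Tishby et al.) is well established; the CIB form written below is only an assumed placeholder, since this page does not state the method's exact objective. The IB seeks a stochastic encoder p(t|x) minimizing

\min_{p(t \mid x)} \; I(X; T) - \beta \, I(T; Y)

where T is the compressed representation of X, Y is the target variable, and \beta trades off compression against retained relevance. A causal extension in the spirit of the CIB would replace the purely observational relevance term I(T; Y) with a measure of causal control over Y, e.g.

\min_{p(t \mid x)} \; I(X; T) - \gamma \, I(T \to Y)

where I(T \to Y) is placeholder notation for an interventional analogue of mutual information; the authors' actual CIB objective may differ.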
Supplementary Material: zip
Primary Area: causal reasoning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5095