Weakly-Supervised Abstraction for Linear Additive Models

Published: 18 Jun 2025, Last Modified: 01 Aug 2025
Venue: CAR @ UAI 2025 (Poster)
License: CC BY 4.0
Keywords: causal abstraction
Abstract: Causal Abstraction provides a way to summarize complex low-level models into smaller, more interpretable causal models on which causal inference can be performed more efficiently. Despite pioneering work in learning causal abstractions, most approaches still require significant knowledge of the abstract model, e.g., the abstract graph, joint observational samples, interventional samples, or a map from low-level to abstract interventions. In this paper, we instead focus on a setting with a weak supervision signal: we require only that the low-level model is a known linear Additive Noise Model and that we have an initial collection of relevant sets, i.e., groups of low-level variables that each correspond to an abstract variable. Given these relevant sets, we show that, in general, a consistent abstract model might not be causally sufficient even when the low-level model is causally sufficient. We then study how to extend these initial relevant sets by defining new abstract variables in an unsupervised way so as to preserve the causal sufficiency of the abstract model. In particular, we focus on identifying the smallest set of variables to add to a user-defined collection of relevant sets in order to guarantee abstract sufficiency. We propose the Relevant Sufficiency Enforcer (RSE) algorithm, a weakly supervised method that, starting from an initial collection of relevant sets, determines the minimal extensions inducing abstract models that preserve causal sufficiency.
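To make the setting concrete, the following is a minimal, hypothetical sketch (not the paper's RSE algorithm) of the two ingredients the abstract assumes: a low-level linear Additive Noise Model, and relevant sets mapping groups of low-level variables to abstract variables via a simple aggregation map. The graph, weights, and the averaging map `tau` are illustrative assumptions.

```python
import numpy as np

# Hypothetical low-level linear Additive Noise Model (ANM):
#   X_j = sum_i A[i, j] * X_i + N_j, with A the weight matrix of a DAG.
rng = np.random.default_rng(0)

# Four low-level variables; edges X0->X1, X0->X2, X1->X3, X2->X3.
A = np.array([
    [0.0, 0.8, 0.5, 0.0],
    [0.0, 0.0, 0.0, 1.2],
    [0.0, 0.0, 0.0, -0.7],
    [0.0, 0.0, 0.0, 0.0],
])
n = 10_000
noise = rng.normal(size=(n, 4))
# With row-vector convention X = X A + N, so X = N (I - A)^{-1}
# (the inverse exists because A is acyclic, hence nilpotent).
X = noise @ np.linalg.inv(np.eye(4) - A)

# Relevant sets: each group of low-level variables corresponds to
# one abstract variable (the grouping here is an assumed example).
relevant_sets = {"Y0": [0], "Y1": [1, 2], "Y2": [3]}

# A simple aggregation map tau: average the variables in each set.
Y = np.column_stack([X[:, idx].mean(axis=1)
                     for idx in relevant_sets.values()])
print(Y.shape)  # abstract samples: (10000, 3)
```

Whether the resulting three-variable abstract model remains causally sufficient depends on which low-level variables the relevant sets omit; identifying the smallest extension of such sets that restores sufficiency is the problem the paper addresses.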
Submission Number: 18