Estimating Interventional Distributions with Uncertain Causal Graphs through Meta-Learning

Published: 09 Jun 2025, Last Modified: 13 Jul 2025. ICML 2025 Workshop SIM Poster. License: CC BY 4.0
Keywords: bayesian causal inference, meta-learning, neural processes
TL;DR: We estimate the interventional distribution directly from observational data, bypassing the difficult modelling of intermediate posteriors over causal structures and functions.
Abstract: In scientific domains---from biology to the social sciences---many questions boil down to \textit{What effect will we observe if we intervene on a particular variable?} If the causal relationships (e.g.~a causal graph) are known, it is possible to estimate the interventional distributions. Without this domain knowledge, the causal structure must be discovered from available observational data. However, observational data are often compatible with multiple causal graphs, making methods that commit to a single structure prone to overconfidence. A principled way to manage this structural uncertainty is via Bayesian inference, which averages over a posterior distribution of possible causal structures and functional mechanisms. Unfortunately, the number of causal structures grows super-exponentially with the number of nodes in the graph, making computations intractable. We circumvent this intractability by using meta-learning to create an end-to-end model: the Model-Averaged Causal Estimation Transformer Neural Process (MACE-TNP). The model is trained to predict the Bayesian model-averaged interventional posterior distribution, and its end-to-end nature bypasses the need for expensive calculations. Empirically, we show that MACE-TNP outperforms strong baselines, establishing meta-learning as a flexible and scalable paradigm for approximating complex Bayesian causal inference.
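To make the abstract's two central points concrete---that observational data can be compatible with several causal graphs, and that a Bayesian treatment averages interventional estimates over a posterior on structures---here is a minimal, self-contained sketch for the two-variable case. It is an illustration only, not the paper's method: it uses a BIC score as a crude proxy for the marginal likelihood and enumerates the three DAGs over {X, Y} by hand, which is exactly the exhaustive computation that becomes intractable (super-exponential in the number of nodes) and that MACE-TNP is trained to bypass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observational data from the ground-truth graph X -> Y.
n = 500
x = rng.normal(0.0, 1.0, n)
y = 2.0 * x + rng.normal(0.0, 1.0, n)

def gauss_loglik(res):
    """Maximum-likelihood Gaussian log-likelihood of residuals."""
    var = res.var()  # MLE variance (ddof=0)
    return -0.5 * len(res) * (np.log(2 * np.pi * var) + 1.0)

def fit_loglik(child, parent=None):
    """Log-likelihood of `child` given an optional linear parent.
    Returns (log-likelihood, number of parameters)."""
    if parent is None:
        return gauss_loglik(child - child.mean()), 2  # mean, variance
    pc, cc = parent - parent.mean(), child - child.mean()
    b = (pc @ cc) / (pc @ pc)                         # OLS slope
    a = child.mean() - b * parent.mean()
    return gauss_loglik(child - a - b * parent), 3    # intercept, slope, variance

def bic(parts):
    """BIC-penalised log-likelihood of a factorised graph."""
    ll = sum(p[0] for p in parts)
    k = sum(p[1] for p in parts)
    return ll - 0.5 * k * np.log(n)

# Score every DAG over {X, Y}: X -> Y, Y -> X, and no edge.
scores = {
    "X->Y": bic([fit_loglik(x), fit_loglik(y, x)]),
    "Y->X": bic([fit_loglik(y), fit_loglik(x, y)]),
    "X  Y": bic([fit_loglik(x), fit_loglik(y)]),
}

# Posterior over graphs (uniform prior; BIC as marginal-likelihood proxy).
s = np.array(list(scores.values()))
w = np.exp(s - s.max())
w /= w.sum()
posterior = dict(zip(scores, w))

# E[Y | do(X = 2)] under each graph, then Bayesian-model-averaged.
x_star = 2.0
pc, cc = x - x.mean(), y - y.mean()
b = (pc @ cc) / (pc @ pc)
a = y.mean() - b * x.mean()
per_graph = {
    "X->Y": a + b * x_star,  # intervening on a cause shifts Y
    "Y->X": y.mean(),        # intervening on an effect leaves Y unchanged
    "X  Y": y.mean(),
}
bma = sum(posterior[g] * per_graph[g] for g in posterior)
print(posterior, bma)
```

Because X -> Y and Y -> X are Markov-equivalent for linear-Gaussian data, they receive identical scores: the posterior splits evenly between them, and the model-averaged interventional mean blends "X causes Y" and "X has no effect on Y" accordingly. This is the structural uncertainty that a method committing to a single graph would ignore.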
Submission Number: 10