Keywords: Causal representation learning, Bayesian Inference, latent variable models
TL;DR: We introduce Decoder BCD, a latent variable decoder model for Bayesian causal discovery in latent space, and present experiments studying causal discovery in unsupervised settings.
Abstract: Learning predictors that do not rely on spurious correlations requires building causal representations. However, learning such representations is very challenging. We therefore formulate the problem of learning a causal representation from high-dimensional data and study causal recovery with synthetic data. This work introduces Decoder BCD, a latent variable decoder model for Bayesian causal discovery, and performs experiments in mildly supervised and unsupervised settings. We present a series of synthetic experiments to characterize the factors important for causal discovery, and show that using known intervention targets as labels helps unsupervised Bayesian inference over the structure and parameters of linear Gaussian additive noise latent structural causal models.
Confirmation: Yes