Keywords: high dimensional causal discovery, score matching, scalability
TL;DR: We demonstrate how to discover the whole causal graph from the second derivative of the log-likelihood in observational non-linear additive Gaussian noise models. We use this information to make causal discovery scalable in the number of nodes.
Abstract: This paper demonstrates how to discover the whole causal graph from the second derivative of the log-likelihood in observational non-linear additive Gaussian noise models. Leveraging scalable machine learning approaches to approximate the score function $\nabla \log p(\mathbf{X})$, we extend the work of Rolland et al. (2022), which recovers only the topological order from the score and requires an expensive pruning step to remove spurious edges among those admitted by the ordering.
Our analysis leads to DAS (acronym for Discovery At Scale), a practical algorithm that reduces the complexity of the pruning by a factor proportional to the graph size. In practice, DAS achieves competitive accuracy with current state-of-the-art methods while being over an order of magnitude faster. Overall, our approach enables principled and scalable causal discovery, significantly lowering the compute barrier.
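The abstract describes the criterion only at a high level; the following is a minimal sketch (not the authors' implementation) of the underlying leaf-identification idea: in additive Gaussian noise models, the diagonal of the score's Jacobian (the second derivative of the log-likelihood) is constant across samples exactly for leaf variables. The toy SCM, the analytic `score` function, and the finite-difference Hessian below are assumptions standing in for the score-matching estimator used in the paper.

```python
# Hedged sketch of score-based leaf identification for additive Gaussian noise models.
# In the paper, nabla log p(X) would be estimated by score matching; here an analytic
# score for a hypothetical toy SCM stands in for that estimator.
import numpy as np

rng = np.random.default_rng(0)

# Toy SCM (assumption): X1 ~ N(0,1), X2 = X1**2 + N(0,1), so X2 is the only leaf.
n = 5000
x1 = rng.normal(size=n)
x2 = x1**2 + rng.normal(size=n)
X = np.stack([x1, x2], axis=1)

def score(X):
    """Analytic score of the toy joint density (stand-in for a score-matching estimate)."""
    x1, x2 = X[:, 0], X[:, 1]
    r = x2 - x1**2                      # residual of the leaf mechanism
    s1 = -x1 + 2.0 * x1 * r             # d/dx1 log p(x1, x2)
    s2 = -r                             # d/dx2 log p(x1, x2)
    return np.stack([s1, s2], axis=1)

def score_jacobian_diag(X, eps=1e-4):
    """Diagonal of the score Jacobian (second derivatives of log p) via central differences."""
    diag = np.empty_like(X)
    for j in range(X.shape[1]):
        Xp, Xm = X.copy(), X.copy()
        Xp[:, j] += eps
        Xm[:, j] -= eps
        diag[:, j] = (score(Xp)[:, j] - score(Xm)[:, j]) / (2.0 * eps)
    return diag

# A variable is a leaf iff the variance (over samples) of its diagonal Hessian entry
# is zero; in practice the minimum-variance node is selected and the process repeats
# on the remaining variables to build a topological order.
var = score_jacobian_diag(X).var(axis=0)
print("Var of diag(Hessian log p):", var)
print("Estimated leaf:", int(np.argmin(var)))  # expected: 1 (i.e. X2)
```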
Supplementary Material: zip