Stable Differentiable Causal Discovery

Published: 27 Jun 2024 · Last Modified: 20 Aug 2024 · Differentiable Almost Everything · CC BY 4.0
Keywords: differentiable causal discovery; causal graph; causal discovery; neural networks
TL;DR: We identify theoretical issues with the training stability of current differentiable causal discovery methods and address them with a new method that achieves higher accuracy and faster convergence.
Abstract: Inferring causal relationships as directed acyclic graphs (DAGs) is an important but challenging problem. Differentiable Causal Discovery (DCD) is a promising approach to this problem, framing the search as a continuous optimization. However, existing DCD methods are numerically unstable, with poor performance beyond tens of variables. In this paper, we propose Stable Differentiable Causal Discovery (SDCD), a new method that improves on previous DCD methods in two ways: (1) It employs an alternative acyclicity constraint that is more stable, both theoretically and empirically, and fast to compute. (2) It uses a training procedure tailored for sparse causal graphs, which are common in real-world scenarios. We first derive SDCD and prove its stability and correctness. We then evaluate it with observational and interventional data, in both small- and large-scale settings. We find that SDCD outperforms existing methods in convergence speed and accuracy, and can scale to thousands of variables.
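The abstract does not spell out which acyclicity constraint SDCD adopts, so as illustration only, here is a minimal sketch (assuming NumPy/SciPy; the function names and the spectral alternative are assumptions, not the paper's published algorithm). It contrasts the classic trace-of-matrix-exponential penalty from NOTEARS with a hypothetical spectral-radius penalty estimated by power iteration, to make concrete what "more stable and fast to compute" can mean for such a constraint.

```python
# Illustrative sketch of acyclicity constraints in differentiable causal
# discovery. NOT the SDCD implementation; h_spectral is a hypothetical
# alternative for illustration.
import numpy as np
from scipy.linalg import expm

def h_expm(W: np.ndarray) -> float:
    """Classic NOTEARS-style constraint: h(W) = tr(exp(W ∘ W)) − d.
    Equals zero iff the weighted graph is acyclic. The matrix exponential
    can overflow for large or dense W, which is one source of the
    numerical instability the abstract alludes to."""
    d = W.shape[0]
    return float(np.trace(expm(W * W)) - d)

def h_spectral(W: np.ndarray, iters: int = 50) -> float:
    """Hypothetical stabler alternative: estimate the spectral radius of
    A = W ∘ W by power iteration. rho(A) = 0 iff the graph is acyclic,
    and the estimate stays bounded regardless of the number of variables."""
    A = W * W
    d = A.shape[0]
    v = np.ones(d) / np.sqrt(d)
    for _ in range(iters):
        v_next = A @ v
        norm = np.linalg.norm(v_next)
        if norm < 1e-12:            # A is nilpotent: graph already acyclic
            return 0.0
        v = v_next / norm
    return float(v @ (A @ v))       # Rayleigh quotient approximates rho(A)

# Usage: a 2-node cycle gets a positive penalty; an acyclic graph gets ~0.
W_cyclic = np.array([[0., 1.], [1., 0.]])
W_acyclic = np.array([[0., 1.], [0., 0.]])
print(h_expm(W_cyclic), h_spectral(W_cyclic))    # both > 0
print(h_expm(W_acyclic), h_spectral(W_acyclic))  # both ≈ 0
```

A design note on the contrast: the power-iteration variant needs only matrix-vector products, so its cost grows gently with the number of variables and its value stays bounded, whereas the matrix exponential can overflow for large or dense W. This is one plausible reading of the stability and speed claims in the abstract, not a statement of SDCD's actual constraint.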
Submission Number: 55