Gradient-Based Neural DAG Learning

Published: 21 Oct 2019 · Last Modified: 17 Nov 2024 · NeurIPS 2019 Deep Inverse Workshop Poster
Keywords: causality, directed acyclic graph, neural network, constrained optimization
TL;DR: We propose a new score-based approach to structure/causal learning that leverages neural networks and a recent continuous constrained formulation of this problem.
Abstract: We propose a novel score-based approach to learning a directed acyclic graph (DAG) from observational data. We adapt a recently proposed continuous constrained optimization formulation to allow for nonlinear relationships between variables using neural networks. This extension allows modeling complex interactions while being more global in its search than greedy approaches. In addition to comparing our method to existing continuous optimization methods, we provide missing empirical comparisons to nonlinear greedy search methods. On both synthetic and real-world data sets, this new method outperforms current continuous methods on most tasks while remaining competitive with existing greedy search methods on important metrics for causal inference.
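The "continuous constrained optimization formulation" the abstract refers to replaces the combinatorial acyclicity requirement with a smooth penalty on a weighted adjacency matrix. A minimal sketch of that constraint (the trace-of-matrix-exponential form from the NOTEARS line of work, which this paper adapts to neural networks; the function name here is illustrative, not from the paper):

```python
# Sketch of the continuous acyclicity constraint used by continuous
# DAG-learning methods: h(W) = tr(exp(W * W)) - d, which equals zero
# exactly when the weighted adjacency matrix W encodes a DAG.
import numpy as np
from scipy.linalg import expm

def acyclicity_constraint(W: np.ndarray) -> float:
    """Return h(W) = tr(exp(W ∘ W)) - d (∘ is elementwise product)."""
    d = W.shape[0]
    return float(np.trace(expm(W * W)) - d)

# An upper-triangular adjacency matrix is a DAG: h(W) = 0.
dag = np.array([[0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [0.0, 0.0, 0.0]])

# A 2-cycle violates acyclicity: h(W) > 0.
cyclic = np.array([[0.0, 1.0],
                   [1.0, 0.0]])

print(acyclicity_constraint(dag))     # ~0.0
print(acyclicity_constraint(cyclic))  # > 0
```

In practice the score (e.g. a neural-network likelihood) is maximized subject to h(W) = 0 via an augmented Lagrangian, so the whole problem becomes differentiable and amenable to gradient-based optimization.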
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/gradient-based-neural-dag-learning/code)