CORE: Towards Scalable and Efficient Causal Discovery with Reinforcement Learning

Published: 16 Jun 2024 · Last Modified: 16 Jun 2024 · CORR · CVPR 2024 Poster · CC BY 4.0
Keywords: Causal Discovery, Reinforcement Learning, Meta Learning
Abstract: Causal discovery is the challenging task of inferring causal structure from data. Motivated by Pearl’s Causal Hierarchy (PCH), which tells us that passive observations alone are not enough to distinguish correlation from causation, there has been a recent push to incorporate interventions into machine learning research. Reinforcement learning provides a convenient framework for such an active approach to learning. This paper presents CORE, a deep reinforcement learning-based approach for causal discovery and intervention planning. CORE learns to sequentially reconstruct causal graphs from data while learning to perform informative interventions. Our results demonstrate that CORE generalizes to unseen graphs and efficiently uncovers causal structures. Furthermore, CORE scales to larger graphs with up to 10 variables and outperforms existing approaches in structure estimation accuracy and sample efficiency.
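The abstract's core premise, that passive observation alone cannot separate correlation from causation while interventions can, is illustrated by the toy sketch below. This is not the paper's CORE implementation; it is a minimal, assumed two-variable structural causal model (X → Y) showing that do-interventions reveal the edge direction.

```python
import random

random.seed(0)

def sample(do_x=None, do_y=None):
    """One draw from an assumed toy SCM with true edge X -> Y (Y = 2X + noise).

    Passing do_x or do_y simulates an intervention: the intervened variable
    is clamped and its incoming causal mechanism is severed.
    """
    x = do_x if do_x is not None else random.gauss(0, 1)
    if do_y is not None:
        y = do_y  # intervening on Y cuts the X -> Y mechanism
    else:
        y = 2 * x + random.gauss(0, 0.1)
    return x, y

def infer_edge(n=1000):
    """Compare interventional means to decide which direction is causal."""
    y_hi = sum(sample(do_x=3)[1] for _ in range(n)) / n
    y_lo = sum(sample(do_x=-3)[1] for _ in range(n)) / n
    x_hi = sum(sample(do_y=3)[0] for _ in range(n)) / n
    x_lo = sum(sample(do_y=-3)[0] for _ in range(n)) / n
    # Shifting X shifts Y (X causes Y); shifting Y leaves X unchanged.
    x_causes_y = abs(y_hi - y_lo) > 1.0
    y_causes_x = abs(x_hi - x_lo) > 1.0
    return x_causes_y, y_causes_x

print(infer_edge())  # -> (True, False)
```

An RL agent such as the one the abstract describes would choose which of these interventions to perform at each step, using the observed outcomes to sequentially build up the full graph.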
Submission Number: 3