Abstract: Causal discovery is an important problem in many sciences, as it enables us to estimate causal relationships from observational data. In the healthcare domain in particular, it can guide practitioners in making informed clinical decisions. Several causal discovery approaches have been developed over the last few decades, and their success largely depends on access to a large number of data samples. In practice, however, an unlimited amount of data is never available.
data is never available. Fortunately, often we have some prior knowledge available from the
problem domain. Particularly, in healthcare settings, we often have some prior knowledge
such as expert opinions, prior RCTs, literature evidence, and systematic reviews about the
clinical problem. This prior information can be utilized in a systematic way to address
the data scarcity problem. However, most of the existing causal discovery approaches
lack a systematic way to incorporate prior knowledge during the search process. Recent
advances in reinforcement learning techniques can be explored to use prior knowledge as
constraints by penalizing the agent for their violations. Therefore, in this work, we propose
a framework KCRL that utilizes the existing knowledge as a constraint to penalize the
search process during causal discovery. This utilization of existing information during causal
discovery reduces the graph search space and enables a faster convergence to the optimal
We evaluated our framework on benchmark synthetic and real datasets as well as on a real-life healthcare application, and compared its performance with several baseline causal discovery methods. The experimental findings show that penalizing the search process for constraint violations yields better performance than existing approaches that do not utilize prior knowledge.
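The penalty idea described above can be illustrated with a minimal sketch. The function below is a hypothetical stand-in for KCRL's actual reward (its name, signature, and constraint encoding are assumptions, not taken from the paper): a candidate graph's base score is reduced by a weighted count of prior-knowledge violations, here expressed as forbidden and required edges.

```python
import numpy as np

def penalized_reward(score, adj, forbidden_edges, required_edges, lam=1.0):
    """Hypothetical constraint-penalized reward for a candidate causal graph.

    score           -- base reward of the candidate graph (e.g. a data-fit score)
    adj             -- binary adjacency matrix; adj[i, j] = 1 means edge i -> j
    forbidden_edges -- (i, j) pairs that prior knowledge says must NOT appear
    required_edges  -- (i, j) pairs that prior knowledge says MUST appear
    lam             -- penalty weight applied per constraint violation
    """
    violations = sum(adj[i, j] == 1 for i, j in forbidden_edges)
    violations += sum(adj[i, j] == 0 for i, j in required_edges)
    return score - lam * violations

# Toy 3-node graph proposed by the agent: 0 -> 1 and 1 -> 2
adj = np.array([[0, 1, 0],
                [0, 0, 1],
                [0, 0, 0]])

# Prior knowledge: edge 0 -> 1 is forbidden, edge 0 -> 2 is required.
# Both constraints are violated, so the reward drops from 10.0 to 8.0.
r = penalized_reward(10.0, adj, forbidden_edges=[(0, 1)], required_edges=[(0, 2)])
```

Under this kind of reward, graphs that contradict the prior knowledge become less attractive to the agent, which is what shrinks the effective search space.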