Keywords: multiple concurrent causes, LLM-curated structural prior
TL;DR: We use LLMs to curate prior information that simplifies causal inference with many potential concurrent causes.
Abstract: Causal inference with many potential concurrent causes presents significant challenges across various fields, from biomedicine to policy analysis. The core challenge lies in understanding how combinations of potential causes influence an outcome, which becomes exponentially more complex as the number of potential concurrent causes increases. To address this challenge, we propose to incorporate structural prior information that describes the interrelations between causes. Specifically, we use a large language model (LLM) to systematically curate this structural information, effectively reducing the complexity of the causal inference task. We validate our method using both a semi-synthetic dataset and a real-world case study from the film industry.
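The sketch below illustrates one way an LLM-curated structural prior could look in practice: the model is asked which pairs of candidate causes plausibly interact, and the answer is turned into a mask that prunes the space of interaction terms a downstream outcome model must consider. The prompt wording, the `query_llm` stub, the cause names, and the mask-based pruning are illustrative assumptions for exposition, not the paper's actual pipeline.

```python
# Hypothetical sketch: eliciting a structural prior over candidate causes from an LLM.
# `query_llm` is a placeholder for a real chat-completion call; here it returns a canned
# JSON answer so the example runs end to end.
import itertools
import json

import numpy as np

CAUSES = ["lead_actor_popularity", "budget", "release_season", "franchise_sequel"]


def query_llm(prompt: str) -> str:
    """Stand-in for an LLM API call (assumption, not part of the paper)."""
    return json.dumps({"interacting_pairs": [["lead_actor_popularity", "budget"],
                                             ["budget", "franchise_sequel"]]})


def curate_structural_prior(causes: list[str]) -> np.ndarray:
    """Ask the LLM which pairs of causes plausibly interact; return a boolean mask."""
    prompt = (
        "Given the candidate causes of a movie's box-office outcome below, list the pairs "
        "that plausibly interact. Answer as JSON with key 'interacting_pairs'.\n"
        f"Causes: {causes}"
    )
    reply = json.loads(query_llm(prompt))
    idx = {name: i for i, name in enumerate(causes)}
    mask = np.zeros((len(causes), len(causes)), dtype=bool)
    for a, b in reply["interacting_pairs"]:
        mask[idx[a], idx[b]] = mask[idx[b], idx[a]] = True
    return mask


if __name__ == "__main__":
    prior = curate_structural_prior(CAUSES)
    # Only pairs flagged by the prior get interaction terms in the outcome model,
    # rather than all C(4, 2) = 6 pairs (and exponentially many higher-order terms).
    kept = [(CAUSES[i], CAUSES[j])
            for i, j in itertools.combinations(range(len(CAUSES)), 2) if prior[i, j]]
    print(kept)
```

In this reading, the structural prior reduces complexity by restricting which combinations of causes the inference procedure has to model, which is the kind of simplification the abstract describes.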
Submission Number: 34