Leveraging LLM-Generated Structural Prior for Causal Inference with Concurrent Causes

Published: 10 Oct 2024, Last Modified: 29 Nov 2024 · CaLM @ NeurIPS 2024 Poster · CC BY 4.0
Keywords: multiple concurrent causes, LLM-curated structural prior
TL;DR: We utilize LLMs to curate prior information that simplifies causal inference involving many potential concurrent causes.
Abstract: Causal inference with many potential concurrent causes presents significant challenges across various fields, from biomedicine to policy analysis. The core challenge lies in understanding how combinations of potential causes influence an outcome, which becomes exponentially more complex as the number of potential concurrent causes increases. To address this challenge, we propose to incorporate structural prior information that describes the interrelations between causes. Specifically, we use a large language model (LLM) to systematically curate this structural information, effectively reducing the complexity of the causal inference task. We validate our method using both a semi-synthetic dataset and a real-world case study from the film industry.
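As a rough illustration of the idea described in the abstract, the sketch below shows one way an LLM-curated structural prior could prune the combinatorial space of concurrent causes before fitting an outcome model: the LLM is asked which pairs of candidate causes plausibly interact, and only those pairs receive interaction terms. This is a minimal, hypothetical sketch, not the authors' implementation; the function names, the prompt format, and the stubbed LLM response are all illustrative assumptions.

```python
"""Minimal sketch (not the paper's code) of using an LLM-curated
structural prior to restrict which cause combinations enter an
outcome model. The LLM call is stubbed; a real system would query
an LLM API and parse a yes/no answer."""

from itertools import combinations
import numpy as np


def llm_says_related(cause_a: str, cause_b: str) -> bool:
    """Placeholder for an LLM query such as:
    'Could {cause_a} and {cause_b} jointly influence the outcome?'
    Here the answer comes from a hand-written lookup (an assumption)."""
    related = {("diet", "exercise"), ("marketing", "release_date")}
    return tuple(sorted((cause_a, cause_b))) in related


def curate_structural_prior(causes: list[str]) -> list[tuple[str, str]]:
    """Ask the (stubbed) LLM which pairs of causes interact.
    Only these pairs get interaction terms, shrinking the feature space."""
    return [(a, b) for a, b in combinations(causes, 2) if llm_says_related(a, b)]


def build_design_matrix(X: np.ndarray, causes: list[str],
                        allowed_pairs: list[tuple[str, str]]) -> np.ndarray:
    """Main effects for every cause; interaction columns only for
    pairs endorsed by the structural prior."""
    idx = {c: i for i, c in enumerate(causes)}
    cols = [X]
    for a, b in allowed_pairs:
        cols.append((X[:, idx[a]] * X[:, idx[b]])[:, None])
    return np.hstack(cols)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    causes = ["diet", "exercise", "marketing", "release_date"]
    X = rng.binomial(1, 0.5, size=(200, len(causes))).astype(float)
    # Synthetic outcome with one main effect and one true interaction.
    y = 1.5 * X[:, 0] + 0.8 * X[:, 0] * X[:, 1] + rng.normal(size=200)

    prior = curate_structural_prior(causes)       # LLM-endorsed interaction pairs
    Z = build_design_matrix(X, causes, prior)
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)  # outcome model on the pruned space
    print("LLM-endorsed interactions:", prior)
    print("Estimated effects:", np.round(coef, 2))
```

In this toy setup the prior reduces the model from all pairwise interactions to only the LLM-endorsed ones, which is the kind of complexity reduction the abstract refers to; the paper's actual procedure may differ.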
Submission Number: 34