Counterfactual Causal Inference in Natural Language with Large Language Models

Published: 10 Oct 2024, Last Modified: 07 Nov 2024 · CaLM @ NeurIPS 2024 Poster · CC BY 4.0
Keywords: Causal structure discovery, Counterfactual inference, End-to-end, Large Language Models
TL;DR: We use LLMs to build causal graphs from real-world unstructured natural language and perform counterfactual causal inference end-to-end.
Abstract: Causal structure discovery methods are commonly applied to structured data where the causal variables are known and where statistical tests can assess the causal relationships. By contrast, recovering a causal structure from unstructured natural language data such as news articles poses numerous challenges due to the absence of known variables or counterfactual data with which to estimate the causal links. Large Language Models (LLMs) have shown promising results in this direction but also exhibit limitations. This work investigates LLMs' abilities to build causal graphs from text documents and perform counterfactual causal inference. We propose an end-to-end method for causal structure discovery and causal inference from natural language: we first use an LLM to extract the instantiated causal variables from text data and build a causal graph. We merge causal graphs from multiple data sources to represent the most exhaustive set of causes possible. We then conduct counterfactual inference on the estimated graph. Conditioning on the causal graph reduces LLM biases and better represents the causal estimands. We demonstrate the applicability of our method on real-world news articles.
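The pipeline in the abstract (extract variables per document, merge the per-document causal graphs, then query counterfactuals on the merged graph) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the LLM extraction step is stubbed out as pre-extracted edge sets, the graphs use hypothetical variable names, and the structural model is a simple deterministic OR over parents (an effect occurs if any of its causes occurs), with the counterfactual posed as a do-intervention that pins a variable to a value.

```python
from collections import defaultdict

def merge_graphs(graphs):
    """Union the (cause, effect) edge sets extracted from different documents."""
    merged = set()
    for g in graphs:
        merged |= set(g)
    return merged

def propagate(graph, roots, interventions):
    """Forward-propagate occurrence through a causal DAG.

    A variable occurs if it is an exogenously occurring root, or if any of
    its parents occurs -- unless a do-intervention pins it to a fixed value.
    Returns a dict mapping every variable in the graph to True/False.
    """
    parents = defaultdict(set)
    nodes = set()
    for cause, effect in graph:
        parents[effect].add(cause)
        nodes |= {cause, effect}

    memo = {}

    def value(n, seen=()):
        if n in interventions:          # do(n := v): cut incoming edges
            return interventions[n]
        if n in memo:
            return memo[n]
        if n in seen:                   # guard against cycles in noisy graphs
            return False
        v = n in roots or any(value(p, seen + (n,)) for p in parents[n])
        memo[n] = v
        return v

    return {n: value(n) for n in nodes}

# Hypothetical edge sets, standing in for LLM extraction from two articles.
g1 = {("drought", "crop failure"), ("crop failure", "food prices rise")}
g2 = {("export ban", "food prices rise")}
merged = merge_graphs([g1, g2])

roots = {"drought", "export ban"}
factual = propagate(merged, roots, interventions={})
# Counterfactual query: had the crop failure not happened, would prices rise?
counterfactual = propagate(merged, roots, interventions={"crop failure": False})
```

In this toy example the merge step matters: with only `g1`, removing the crop failure would remove the price rise, but the merged graph retains the alternative cause contributed by the second document, so the counterfactual answer changes.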
Submission Number: 19