Can Large Language Models Distinguish Cause from Effect?

Published: 09 Jul 2022, Last Modified: 05 May 2023, CRL@UAI 2022 Poster
Keywords: large language models, GPT, post-hoc analysis, causal and anticausal relations
TL;DR: We conduct a post-hoc analysis to check whether large language models can distinguish cause from effect.
Abstract: Identifying the causal direction between two variables has long been an important but challenging task in causal inference. Existing work proposes to distinguish whether X->Y or Y->X by setting up an input-output learning task on the two variables, since causal and anticausal learning perform differently under semi-supervised learning and domain shift. This approach works for many task-specific models trained on input-output pairs. However, the rise of general-purpose large language models (LLMs) poses several challenges to this task-specific learning approach: continued training of LLMs is unlikely to be affordable for university labs, and LLMs are no longer trained on specific input-output pairs. In this work, we propose a new paradigm to distinguish cause from effect using LLMs. Specifically, we conduct a post-hoc analysis using natural language prompts that describe different possible causal stories behind the X, Y pairs, and test their zero-shot performance. Our experiments show that the prompts describing the same causal story as the ground-truth data-generating direction achieve the highest zero-shot performance, with a 2% margin over anticausal prompts. We highlight that identifying more causal relations using LLMs is an interesting direction for future work. Our code and data are at https://github.com/cogito233/llm-bivariate-causal-discovery
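
To make the zero-shot prompt comparison described in the abstract concrete, here is a minimal sketch in Python. It is not the paper's actual method or prompts: it assumes the HuggingFace transformers library, uses GPT-2 as a stand-in for the larger models, and the prompt wording, the guess_direction helper, and the altitude/temperature example are all hypothetical illustrations.

```python
# Minimal sketch: compare an LLM's zero-shot likelihood of a causal story
# against the anticausal story and pick the higher-scoring direction.
# GPT-2 stands in for a larger LLM; prompt wording is hypothetical.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_likelihood(text: str) -> float:
    """Average per-token log-likelihood of `text` under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood over tokens.
    return -out.loss.item()

def guess_direction(x_name: str, y_name: str) -> str:
    """Pick the direction whose causal story the model scores higher."""
    causal = f"{x_name} causes {y_name}."
    anticausal = f"{y_name} causes {x_name}."
    if sentence_log_likelihood(causal) >= sentence_log_likelihood(anticausal):
        return "X->Y"
    return "Y->X"

print(guess_direction("altitude", "temperature"))  # expected: X->Y
```

In this sketch the "causal story" is a single templated sentence; the paper evaluates richer prompt formulations over X, Y pairs and aggregates zero-shot performance per direction, so this only illustrates the comparison mechanism.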