Answering Causal Questions with Augmented LLMs

Published: 23 Jun 2023, Last Modified: 13 Jul 2023, DeployableGenerativeAI
Keywords: causality, causal llm, augmented language model, tool usage
TL;DR: We augment LLMs with access to a causal expert system to provide answers to causal questions.
Abstract: Large Language Models (LLMs) are revolutionising the way we interact with machines, enabling entirely new applications. An emerging use case for LLMs is to provide a chat interface to complex underlying systems, allowing natural language interaction without requiring the user to learn system specifics. This also allows LLMs to be augmented to perform tasks that they are ill-suited to perform by themselves. One example of this is precise causal reasoning. In this paper, we explore one component in building conversational systems with causal question-answering capabilities. Specifically, we augment LLMs with access to the precomputed outputs of a causal expert model and examine their effectiveness at answering causal questions, by providing either: 1) the predicted causal graph and related treatment effects in the LLM context; or 2) access to an API for deriving insights from the output of the causal model. Our experiments show that neither method fully solves the task. However, context-augmented LLMs make significantly more mistakes than the API-augmented LLMs, whose performance is invariant to the size of the causal problem. We believe that these insights generalise to complex reasoning tasks beyond causal reasoning, and we hope to inspire further research into building causality-enabled conversational systems.
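The two augmentation strategies in the abstract can be illustrated with a minimal sketch. This is not the paper's code: the causal graph, the treatment-effect values, and the function names (`build_context_prompt`, `causal_api`) are hypothetical, chosen only to contrast injecting the full model output into the LLM context versus exposing it through a small query API.

```python
# Hypothetical sketch of the two augmentation methods from the abstract.
# The graph, effect values, and function names are illustrative assumptions,
# not the paper's actual implementation.

# Precomputed output of a causal expert model: a causal graph (edge list)
# and average treatment effects (ATEs) for each cause -> effect pair.
CAUSAL_GRAPH = [("ad_spend", "traffic"), ("traffic", "sales")]
TREATMENT_EFFECTS = {("ad_spend", "traffic"): 0.8, ("traffic", "sales"): 0.5}


def build_context_prompt(question: str) -> str:
    """Method 1: serialise the full graph and effects into the LLM context."""
    edges = "; ".join(f"{a} -> {b}" for a, b in CAUSAL_GRAPH)
    effects = "; ".join(
        f"ATE({a} -> {b}) = {v}" for (a, b), v in TREATMENT_EFFECTS.items()
    )
    return (
        f"Causal graph: {edges}\n"
        f"Treatment effects: {effects}\n"
        f"Question: {question}"
    )


def causal_api(query_type: str, cause: str, effect: str):
    """Method 2: a small API the LLM can call per question, so the context
    never has to hold the whole (possibly large) causal model output."""
    if query_type == "is_edge":
        return (cause, effect) in CAUSAL_GRAPH
    if query_type == "ate":
        return TREATMENT_EFFECTS.get((cause, effect))
    raise ValueError(f"unknown query type: {query_type}")
```

Under this framing, the context size of method 1 grows with the causal problem, while method 2 answers each query with a constant-size API call, which matches the abstract's observation that the API-augmented variant is invariant to problem size.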
Submission Number: 60