Can large language models reason about causal relationships in multimodal time series data?

Published: 10 Oct 2024, Last Modified: 31 Oct 2024 · CaLM @ NeurIPS 2024 Poster · CC BY 4.0
Keywords: LLMs, time series
Abstract: Large Language Models (LLMs) have shown promise in transforming the ways that individuals synthesize and interact with large amounts of information. However, current LLMs are limited in their ability to explain causal relationships in data. In this paper, we investigate the ability of LLMs to answer queries about causal relationships within time series data. We generate synthetic datasets based on three distinct directed acyclic graphs (DAGs) representing causal relationships among time series variables. We first conduct the analysis with abstract variable names and later assign real-world meanings to these variables consistent with the DAG structures. Using in-context learning, we present the relationships among these variables to the LLM in the prompt and evaluate how effectively the LLMs identify the variables that caused specific observations in an outcome variable.
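The abstract does not specify the data-generating process or prompt format. As a minimal illustrative sketch, assuming a hypothetical two-parent DAG (X → Y ← Z) with lag-1 linear effects and made-up coefficients (none of which are taken from the paper), the setup might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100  # number of time steps

# Hypothetical DAG: X -> Y <- Z, with lag-1 causal effects.
# (The paper's three actual DAGs are not given in the abstract.)
X = rng.normal(size=T)
Z = rng.normal(size=T)
Y = np.zeros(T)
for t in range(1, T):
    # Y at time t is driven by X and Z at time t-1, plus noise.
    Y[t] = 0.8 * X[t - 1] + 0.5 * Z[t - 1] + 0.1 * rng.normal()

# In-context prompt: state the causal relationships, show recent
# observations, and ask the model to attribute a change in Y.
prompt = (
    "Causal relationships: X causes Y with a one-step lag; "
    "Z causes Y with a one-step lag.\n"
    f"Recent observations: X={X[-3:].round(2).tolist()}, "
    f"Z={Z[-3:].round(2).tolist()}, Y={Y[-3:].round(2).tolist()}\n"
    "Question: which variable most likely caused the latest change in Y?"
)
print(prompt)
```

The model's answer can then be scored against the ground-truth parent implied by the simulated coefficients, which is the general evaluation pattern the abstract describes.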
Submission Number: 44