CRAB: Assessing the Strength of Causal Relationships Between Real-world Events

Published: 14 Dec 2023, Last Modified: 31 Jan 2024 · LLM-CP @ AAAI 2024 (Oral)
Keywords: causal reasoning, benchmark, causal score, event causality
TL;DR: We introduce CRAB, a benchmark for evaluating reasoning capabilities of LLMs about causality between real-world events.
Abstract: Understanding narratives requires reasoning about the cause-and-effect relationships between events mentioned in the text. While existing foundation models yield impressive results on many NLP tasks that require reasoning, it is unclear whether they understand the complexity of the underlying network of causal relationships between events in narratives. In this work, we present CRAB, a new Causal Reasoning Assessment Benchmark designed to evaluate causal understanding of events in real-world narratives. CRAB contains fine-grained, contextual causality annotations for ~2.7K pairs of real-world events drawn from various newsworthy event timelines (e.g., the acquisition of Twitter by Elon Musk). Using CRAB, we measure the performance of several large language models and find that most systems perform poorly on the task. Motivated by classical causal principles, we also analyze the causal structures of groups of events in CRAB, and find that models reason worse about causality when events are embedded in complex causal structures than in simple linear causal chains. We make our dataset and code available to the research community.
Submission Number: 8