Counterfactual-Consistency Prompting for Relative Temporal Understanding in Large Language Models

ACL ARR 2024 December Submission1648 Authors

16 Dec 2024 (modified: 05 Feb 2025) · CC BY 4.0
Abstract: Despite the advanced capabilities of large language models (LLMs), their temporal reasoning ability remains underdeveloped. Prior work has highlighted this limitation, particularly in maintaining temporal consistency when reasoning about event relations. For example, models often confuse mutually exclusive temporal relations such as “before” and “after” between events and make inconsistent predictions. In this work, we tackle the issue of temporal inconsistency in LLMs by proposing a novel counterfactual prompting approach. Our method generates counterfactual questions and enforces collective constraints, enhancing the model’s consistency. We evaluate our method on multiple datasets, demonstrating significant improvements in event ordering for both explicit and implicit events, as well as in temporal commonsense understanding, by effectively addressing temporal inconsistencies.
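The core consistency check described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: `ask_model` is a hypothetical stand-in for an LLM call (stubbed here with canned answers), and only the mutual-exclusivity constraint between “before” and “after” is shown.

```python
# Mutually exclusive temporal relations: if "A before B" is true,
# the counterfactual "A after B" must be false.
MUTUALLY_EXCLUSIVE = {"before": "after", "after": "before"}

def ask_model(question: str) -> str:
    """Hypothetical stand-in for an LLM query; a real implementation
    would prompt a model here. The canned answers below deliberately
    simulate the inconsistency the paper targets."""
    canned = {
        "Did the meeting happen before the lunch?": "yes",
        "Did the meeting happen after the lunch?": "yes",  # inconsistent!
    }
    return canned[question]

def counterfactual_consistent(event_a: str, event_b: str, relation: str) -> bool:
    """Ask the original question and its counterfactual (the mutually
    exclusive relation); the answers are consistent only if they differ."""
    original = f"Did {event_a} happen {relation} {event_b}?"
    counterfactual = (
        f"Did {event_a} happen {MUTUALLY_EXCLUSIVE[relation]} {event_b}?"
    )
    return ask_model(original) != ask_model(counterfactual)

# The stubbed model answers "yes" to both questions, so the check fails.
print(counterfactual_consistent("the meeting", "the lunch", "before"))  # False
```

In the paper's full method, such constraints are presumably enforced collectively across a set of events rather than checked pairwise as in this sketch.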
Paper Type: Short
Research Area: Information Extraction
Research Area Keywords: Event extraction, Reasoning, Prompting
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 1648
