Improving Temporal Reasoning of Language Models via Recounted Narratives

ACL ARR 2024 June Submission 3405 Authors

16 Jun 2024 (modified: 02 Jul 2024) · CC BY 4.0
Abstract: Reasoning about time and temporal relations is an integral aspect of human cognition, essential for perceiving the world and navigating our experiences. Though language models (LMs) have demonstrated impressive performance on many reasoning tasks, temporal reasoning remains challenging due to its intrinsic complexity. In this work, we first study temporal graph generation, an essential temporal reasoning task, to unveil LMs' inherent, global reasoning capabilities. We show that this task presents great challenges even for the most powerful large language models (LLMs), such as GPT-3.5/4. We also observe a significant performance gap: small LMs (<10B parameters) lag behind LLMs by 50%. Next, we study how to close this gap under a budget constraint, e.g., without model finetuning. We propose GENSORT, a new prompting technique tailored for temporal reasoning, which first converts the event set into a Python class, then prompts an LM to generate a temporally grounded narrative, which in turn guides the final generation of a temporal graph. Extensive experiments showcase the efficacy of GENSORT in improving various metrics. Notably, GENSORT attains the highest F1 on the Schema-11 evaluation set, while securing an overall F1 on par with GPT-3.5. GENSORT also achieves the best structural similarity across the board, even compared with GPT-3.5/4.
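The abstract describes GENSORT as a three-stage prompt chain: events are rendered as a Python class, an LM writes a temporally grounded narrative, and that narrative guides temporal-graph generation. The paper's actual prompts are not shown on this page, so the following is only a minimal sketch of that pipeline; every name here (Event, events_to_class_source, the prompt wording, and the 'A -> B' edge format) is an assumption for illustration, not the authors' code.

```python
from dataclasses import dataclass


@dataclass
class Event:
    """One node of the target temporal graph (hypothetical schema)."""
    name: str
    description: str


def events_to_class_source(events: list[Event]) -> str:
    """Stage 1 sketched in the abstract: render the event set as
    Python-class source code (exact format assumed)."""
    lines = ["class EventSet:"]
    lines += [f"    e{i} = {e.name!r}  # {e.description}"
              for i, e in enumerate(events)]
    return "\n".join(lines)


def narrative_prompt(class_source: str) -> str:
    """Stage 2: ask the LM for a temporally grounded narrative."""
    return (f"{class_source}\n\n"
            "Write a short story in which these events occur, "
            "making their temporal order explicit.")


def graph_prompt(class_source: str, narrative: str) -> str:
    """Stage 3: the narrative guides graph generation, here emitted as
    'A -> B' edges meaning A happens before B (edge format assumed)."""
    return (f"{class_source}\n\nNarrative:\n{narrative}\n\n"
            "Based on the narrative, list the temporal graph as "
            "'A -> B' edges, one per line.")


if __name__ == "__main__":
    events = [Event("board", "passengers board the plane"),
              Event("takeoff", "the plane takes off")]
    src = events_to_class_source(events)
    print(narrative_prompt(src))
    # In the real pipeline an LM completes the narrative; stubbed here.
    print(graph_prompt(src, "Passengers board, then the plane takes off."))
```

In this reading, grounding the final graph in a generated narrative forces the model to commit to one global ordering of events before extracting pairwise relations, rather than judging each relation in isolation.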
Paper Type: Long
Research Area: Information Extraction
Research Area Keywords: relation extraction, zero/few-shot extraction
Contribution Types: NLP engineering experiment, Approaches to low-resource settings, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 3405