Once Upon a ${\it Time}$ in ${\it Graph}$: Relative-Time Pretraining for Complex Temporal Reasoning

Published: 07 Oct 2023, Last Modified: 01 Dec 2023, EMNLP 2023 Main
Submission Type: Regular Long Paper
Submission Track: Question Answering
Keywords: Temporal Question Answering, Time-aware Pre-training
TL;DR: We devise a graph view on temporally-scoped events and thus propose a time-aware pre-training method for complex temporal reasoning.
Abstract: Our physical world is constantly evolving over time, rendering challenges for pre-trained language models to understand and reason over the temporal contexts of texts. Existing work focuses on strengthening the direct association between a piece of text and its time-stamp. However, the knowledge-time association is usually insufficient for downstream tasks that require reasoning over temporal dependencies between pieces of knowledge. In this work, we make use of the underlying nature of time: all temporally-scoped sentences are strung together along a one-dimensional time axis. We therefore suggest creating a graph structure based on the relative placements of events along this axis. Inspired by this graph view, we propose \textsc{RemeMo} ($\underline{Re}lative\ Ti\underline{me}\ \underline{Mo}deling$), which explicitly connects all temporally-scoped facts by modeling the time relations between any two sentences. Experimental results show that \textsc{RemeMo} outperforms the T5 baseline on multiple temporal question answering datasets under various settings. Further analysis suggests that \textsc{RemeMo} is especially good at modeling long-range complex temporal dependencies.
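To make the graph view in the abstract concrete, below is a minimal, hypothetical sketch of how temporally-scoped sentences could be connected by relative-time edges. The class names, label set (before/after/overlap), and year-based scopes are illustrative assumptions, not the paper's actual implementation or training objective.

```python
# Hypothetical sketch: place temporally-scoped sentences on a one-dimensional
# time axis and connect every pair with an edge labeled by their relative
# time relation. Names and the relation label set are assumptions.
from dataclasses import dataclass
from itertools import combinations

@dataclass
class ScopedSentence:
    text: str
    start: int  # start year of the sentence's temporal scope
    end: int    # end year of the sentence's temporal scope

def relative_time_relation(a: ScopedSentence, b: ScopedSentence) -> str:
    """Label the time relation of sentence a with respect to sentence b."""
    if a.end < b.start:
        return "before"
    if a.start > b.end:
        return "after"
    return "overlap"

def build_relation_graph(sentences: list[ScopedSentence]) -> list[tuple[str, str, str]]:
    """Connect every pair of temporally-scoped sentences with a relation edge."""
    edges = []
    for a, b in combinations(sentences, 2):
        edges.append((a.text, relative_time_relation(a, b), b.text))
    return edges

if __name__ == "__main__":
    facts = [
        ScopedSentence("X served as mayor.", 1995, 2001),
        ScopedSentence("Y chaired the council.", 1999, 2004),
        ScopedSentence("Z founded the museum.", 2010, 2010),
    ]
    for edge in build_relation_graph(facts):
        print(edge)
```

In a pre-training setting, such pairwise relation labels could serve as supervision for a time-relation prediction objective over pairs of sentences; the exact formulation used by \textsc{RemeMo} is described in the paper itself.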
Submission Number: 5481