Research Area: Alignment
Keywords: Temporal Reasoning; Reinforcement Learning from AI Feedback; Supervised Fine-tuning
TL;DR: Studying generalized temporal reasoning, we find a close link between time and mathematics. Our framework covers 38 tasks and yields Timo at the 7B and 13B scales; Timo outperforms base LLMs by 10.0 and 7.6 average accuracy points, setting a new SOTA.
Abstract: Reasoning about time is essential for Large Language Models (LLMs) to understand the world. Prior work focuses on solving specific tasks, primarily time-sensitive question answering.
While effective, these methods do not generalize to the wider spectrum of temporal reasoning tasks.
We therefore pose a crucial question: Can we build a universal framework to handle a variety of temporal reasoning tasks?
To that end, we systematically study 38 temporal reasoning tasks.
Based on the observation that 19 of these tasks are directly related to mathematics, we first leverage an available mathematical dataset to lay a solid foundation for temporal reasoning.
However, an in-depth study indicates that focusing solely on mathematical enhancement falls short of addressing pure temporal reasoning tasks. To mitigate this limitation, we propose a simple but effective self-critic temporal optimization method that enhances the model's temporal reasoning capabilities without sacrificing its general task abilities.
Finally, we develop Timo, a model designed to excel in temporal reasoning at the 7B and 13B scales. Notably, Timo outperforms its counterpart LLMs by 10.0 and 7.6 points in average accuracy, respectively, achieving new state-of-the-art (SOTA) performance among models of comparable size. Extensive experiments further validate our framework's effectiveness and its generalization across diverse temporal tasks. The code is available at https://github.com/zhaochen0110/Timo.
Supplementary Material: zip
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Submission Number: 669