Abstract: Modelling and understanding time remains a challenge in contemporary video understanding models. With language emerging as a key driver towards powerful generalization, it is imperative for foundational video-language models to have a sense of time. In this paper, we consider a specific aspect of temporal understanding: consistency of time order as elicited by before/after relations. We establish that seven existing video-language models struggle to understand even such simple temporal relations. We then question whether it is feasible to equip these foundational models with temporal awareness without re-training them from scratch. Towards this, we propose a temporal adaptation recipe on top of one such model, VideoCLIP, based on post-pretraining on a small amount of video-text data. We conduct a zero-shot evaluation of the adapted models on six datasets for three downstream tasks which require varying degrees of time awareness. We observe encouraging performance gains, especially when the task needs higher time awareness. Our work serves as a first step towards probing and instilling a sense of time in existing video-language models without the need for data- and compute-intensive training from scratch.