Abstract: Video Moment Retrieval (VMR) is a challenging cross-modal retrieval task that aims to retrieve the most relevant moment from an untrimmed video given a language query. In this task, cross-modal semantics must be thoroughly comprehended, and the supervisory signal available from limited annotations must be efficiently mined. To this end, we develop a Context Alignment Network (CAN) that tackles VMR by modeling and aligning cross-modal contexts. First, we employ fine-grained fusion to preserve rich low-level information and conduct complementary local-global context modeling to translate the low-level information into high-level semantics. Second, we propose a novel context alignment learning scheme that exploits additional context alignment supervision during training. The intuitive motivation is that the contextual information around the predicted moment boundaries should be similar to that around the ground-truth moment boundaries. We therefore define the alignment degree of boundary contexts between video moments as a proxy measure of their temporal overlap. By minimizing the context alignment loss, the model is driven to learn a context-level alignment relationship between moment boundaries. We find that context alignment learning effectively improves retrieval accuracy by exploiting context alignment as an additional supervisory signal. Extensive experiments show that CAN attains competitive performance compared with state-of-the-art methods on the Charades-STA and TACoS datasets, demonstrating the effectiveness of our proposed method.
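To make the intuition concrete, the following is a minimal sketch (not the paper's implementation) of how a context alignment loss over boundary contexts could look. The frame-level features, hard boundary indices, fixed window size, mean pooling, and cosine similarity as the alignment measure are all illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def boundary_context(feats, idx, half_window=2):
    """Pool a local window of frame features around a boundary index.

    feats: (T, D) frame-level features; idx: integer boundary index.
    Window size and mean pooling are illustrative choices.
    """
    T = feats.size(0)
    offsets = torch.arange(-half_window, half_window + 1, device=feats.device)
    positions = (idx + offsets).clamp(0, T - 1)   # clamp to valid frame range
    return feats[positions].mean(dim=0)           # (D,) pooled context vector

def context_alignment_loss(feats, pred_start, pred_end, gt_start, gt_end):
    """Drive the contexts around predicted boundaries toward the contexts
    around ground-truth boundaries (alignment degree -> 1)."""
    s_align = F.cosine_similarity(
        boundary_context(feats, pred_start), boundary_context(feats, gt_start), dim=0)
    e_align = F.cosine_similarity(
        boundary_context(feats, pred_end), boundary_context(feats, gt_end), dim=0)
    # Alignment degree lies in [-1, 1]; the loss vanishes when both
    # predicted boundary contexts match the ground-truth boundary contexts.
    return 1.0 - 0.5 * (s_align + e_align)
```

In practice, boundary predictions in VMR models are often soft distributions rather than hard indices, so the actual loss would likely operate on expected or attention-weighted contexts; the sketch only illustrates the proxy-overlap idea described in the abstract.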