Abstract: As a significant yet challenging multimedia task, Video Moment Retrieval (VMR) aims to retrieve the specific moment corresponding to a sentence query from an untrimmed video. Although recent works have made remarkable progress on this task, they are implicitly rooted in the closed-set assumption that all given queries are video-relevant. Given a video-irrelevant out-of-distribution (OOD) query in open-set scenarios, they still use it for retrieval and return a wrong result, which might lead to irrecoverable losses in high-risk applications, e.g., criminal activity detection. To this end, we explore a new VMR setting termed Open-Set Video Moment Retrieval (OS-VMR), where the model should not only retrieve precise moments for in-distribution (ID) queries, but also reject OOD queries. In this paper, we make the first attempt toward OS-VMR and propose a novel model, OpenVMR, which first distinguishes ID from OOD queries based on normalizing flows, and then conducts moment retrieval on the ID queries. Specifically, we first learn the ID distribution by constructing a normalizing flow, under the assumption that the ID query distribution obeys a multivariate Gaussian. Then, we introduce an uncertainty score to search for the ID-OOD separating boundary. After that, we refine the ID-OOD boundary by pulling together ID query features. Besides, video-query matching and frame-query matching are designed for coarse-grained and fine-grained cross-modal interaction, respectively. Finally, a positive-unlabeled learning module is introduced for moment retrieval. Experimental results on three challenging datasets demonstrate the effectiveness of our OpenVMR.
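To make the flow-based ID/OOD scoring step concrete, below is a minimal sketch (not the authors' implementation) of how a normalizing flow with a multivariate Gaussian base distribution could assign an uncertainty score to query features and reject likely-OOD queries. The coupling-layer design, feature dimension, and threshold are illustrative assumptions only.

```python
# Minimal sketch of flow-based ID/OOD query scoring; all layer sizes,
# the coupling design, and the rejection threshold are assumptions.
import math
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling layer: transforms half of the feature dims
    conditioned on the other half, keeping the Jacobian log-det tractable."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.dim = dim
        half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(half, hidden), nn.ReLU(),
            nn.Linear(hidden, (dim - half) * 2),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.dim // 2], x[:, self.dim // 2:]
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                      # stabilize the scale term
        z2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=-1)                # log|det J| of the affine map
        return torch.cat([x1, z2], dim=-1), log_det

class QueryFlow(nn.Module):
    """Stack of coupling layers mapping query features to a standard Gaussian."""
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([AffineCoupling(dim) for _ in range(n_layers)])
        self.dim = dim

    def log_prob(self, x):
        log_det_total = x.new_zeros(x.size(0))
        for layer in self.layers:
            x, log_det = layer(x)
            x = x.flip(dims=[-1])              # permute dims so all get transformed
            log_det_total = log_det_total + log_det
        # log density under the standard multivariate Gaussian base distribution
        base = -0.5 * (x.pow(2).sum(-1) + self.dim * math.log(2 * math.pi))
        return base + log_det_total

# Usage sketch: train the flow on ID query features by maximizing likelihood,
# then treat low log-likelihood (high uncertainty) queries as OOD and reject them.
flow = QueryFlow(dim=512)
query_feat = torch.randn(8, 512)               # e.g., sentence-encoder outputs
uncertainty = -flow.log_prob(query_feat)       # higher => more likely OOD
threshold = 1000.0                             # assumed; tuned on held-out ID data
is_ood = uncertainty > threshold
```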
Primary Subject Area: [Content] Vision and Language
Relevance To Conference: Video Moment Retrieval (VMR) is a challenging yet crucial task in multimedia information retrieval, which has attracted significant attention in recent years due to its broad potential applications in areas such as multimedia search and human activity retrieval.
Submission Number: 992