$\mathrm{R}^2$-VOS: Robust Referring Video Object Segmentation via Relational Cycle Consistency

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Referring Video Object Segmentation
Abstract: Referring video object segmentation (R-VOS) aims to segment the mask of an object in a video given a linguistic expression referring to that object. R-VOS introduces natural language into the traditional VOS loop to increase flexibility, but all current studies rest on a strict assumption: the object described by the expression must exist in the video, i.e., the expression and the video must share an object-level semantic consensus. This assumption is often violated in real-world applications, where an expression may be queried against videos that do not contain the referred object, and existing methods consistently fail in such cases because they rely on the assumption. In this work, we argue that modeling semantic consensus is necessary to improve the robustness of R-VOS. Accordingly, we pose an extended task that drops the semantic consensus assumption, named Robust R-VOS ($\mathrm{R}^2$-VOS). The new task essentially corresponds to jointly modeling the primary R-VOS problem and its dual problem, text reconstruction. We observe that the relational structure of the textual embedding space is preserved across the text-video-text transformation cycle that links the primary and dual problems. We leverage this cycle consistency to discriminate whether semantic consensus holds and to consolidate it when it does, thereby advancing the primary task. We further propose an early grounding module that enables parallel optimization of the primary and dual problems. To measure the robustness of R-VOS models against unpaired videos and expressions, we construct a new evaluation dataset, $\mathrm{R}^2$-Youtube-VOS. Extensive experiments demonstrate that our method not only identifies negative text-video pairs but also improves segmentation accuracy for positive pairs with superior disambiguating ability. Our model achieves state-of-the-art performance on the Ref-DAVIS17, Ref-Youtube-VOS, and $\mathrm{R}^2$-Youtube-VOS datasets.
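To make the cycle-consistency idea concrete, below is a minimal illustrative sketch (not the paper's actual formulation, which is not given in the abstract). It assumes the dual text-reconstruction branch produces an embedding for each expression after the text-video-text cycle, and it compares the pairwise relational structure of the original and reconstructed textual embeddings rather than forcing point-wise identity. All function and variable names here are hypothetical.

```python
import torch
import torch.nn.functional as F

def relational_cycle_consistency_loss(text_emb: torch.Tensor,
                                       recon_text_emb: torch.Tensor) -> torch.Tensor:
    """Hypothetical relational cycle-consistency loss.

    text_emb:       (B, D) embeddings of the input referring expressions.
    recon_text_emb: (B, D) embeddings of the expressions reconstructed from
                    video features by the dual (text-reconstruction) branch.

    Instead of matching embeddings one-to-one, this penalizes changes in the
    pairwise (cosine) relational structure of the batch across the
    text -> video -> text cycle.
    """
    t = F.normalize(text_emb, dim=-1)
    r = F.normalize(recon_text_emb, dim=-1)
    rel_t = t @ t.t()  # (B, B) relations among original text embeddings
    rel_r = r @ r.t()  # (B, B) relations among reconstructed embeddings
    return F.mse_loss(rel_r, rel_t)

def consensus_score(text_emb: torch.Tensor,
                    recon_text_emb: torch.Tensor) -> torch.Tensor:
    """Per-sample cosine agreement; a low score could flag a negative
    (unpaired) text-video input at inference time."""
    return F.cosine_similarity(text_emb, recon_text_emb, dim=-1)
```

In this sketch, the loss trains the cycle on positive pairs, while the per-sample score could serve as a simple consensus check for rejecting negative text-video pairs; the paper's actual discrimination mechanism may differ.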
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Supplementary Material: zip
Please Choose The Closest Area That Your Submission Falls Into: Applications (eg, speech processing, computer vision, NLP)