AutoQ-VIS: Improving Unsupervised Video Instance Segmentation via Automatic Quality Assessment

Published: 27 Aug 2025, Last Modified: 01 Oct 2025
Venue: LIMIT 2025 (Oral)
License: CC BY 4.0
Keywords: Video Instance Segmentation; Unsupervised Learning; Segmentation Quality Assessment
TL;DR: We propose AutoQ-VIS, an unsupervised video instance segmentation framework that eliminates manual labeling via automatic quality assessment and self-training loops.
Abstract: Video Instance Segmentation (VIS) faces significant annotation challenges due to its dual requirements of pixel-level masks and temporal consistency labels. While recent unsupervised methods like VideoCutLER eliminate optical-flow dependencies through synthetic data, they remain constrained by the synthetic-to-real domain gap. We present AutoQ-VIS, a novel unsupervised framework that bridges this gap through quality-guided self-training. Our approach establishes a closed-loop system between pseudo-label generation and automatic quality assessment, enabling progressive adaptation from synthetic to real videos. Experiments demonstrate state-of-the-art performance, with 52.6 $\text{AP}_{50}$ on the YouTubeVIS-2019 $\texttt{val}$ set, surpassing the previous best method VideoCutLER by 4.4% while requiring no human annotations. This demonstrates the viability of quality-aware self-training for unsupervised VIS. The source code of our method is available at https://github.com/wcbup/AutoQ-VIS.
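The closed loop described in the abstract (generate pseudo-labels on real videos, score them with an automatic quality assessor, keep only the trusted ones, retrain) can be pictured as a filter-and-retrain cycle. Below is a minimal sketch of such a loop; it is an illustration under stated assumptions, not the paper's actual implementation, and `predict`, `assess`, `finetune`, the threshold `tau`, and the round count are all hypothetical placeholders.

```python
from typing import Any, Callable, List, Tuple

Video = Any   # an unlabeled real video clip
Masks = Any   # predicted instance masks with track identities

def self_training_loop(
    predict: Callable[[Video], Masks],        # model pretrained on synthetic data
    assess: Callable[[Video, Masks], float],  # automatic quality score in [0, 1] (assumption)
    finetune: Callable[[List[Tuple[Video, Masks]]], Callable[[Video], Masks]],
    real_videos: List[Video],
    rounds: int = 3,                          # number of self-training rounds (assumption)
    tau: float = 0.7,                         # acceptance threshold (assumption)
) -> Callable[[Video], Masks]:
    """Closed loop: predict -> assess -> filter -> retrain, repeated."""
    for _ in range(rounds):
        accepted: List[Tuple[Video, Masks]] = []
        for video in real_videos:
            masks = predict(video)            # pseudo-label generation
            if assess(video, masks) >= tau:   # automatic quality assessment
                accepted.append((video, masks))
        predict = finetune(accepted)          # retrain only on trusted pseudo-labels
    return predict
```

The key design point this sketch captures is that the quality assessor, rather than a human annotator, decides which pseudo-labels enter the next training round, which is what allows progressive adaptation from synthetic to real videos without manual labels.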
Submission Number: 25