Amortising the Gap between Pre-training and Fine-tuning for Video Instance Segmentation

21 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Video instance segmentation, Instance segmentation, Augmentation, Pseudo Video
Abstract: Video Instance Segmentation (VIS) development relies heavily on fine-tuning models that were pre-trained on images. However, there is a significant gap between image pre-training and video fine-tuning that is often overlooked. To bridge this gap effectively, we present a novel approach, ``\textit{video pre-training}'', which achieves substantial improvements in VIS. Notably, our approach improves performance on complex video datasets involving intricate instance relationships. Our primary contribution is to minimize the disparities between the pre-training and fine-tuning stages at both the data and the modeling level. Specifically, we introduce consistent pseudo-video augmentations to enrich data diversity while maintaining instance prediction consistency across both stages. Additionally, at the modeling level, we incorporate multi-scale temporal modules during pre-training to strengthen the model's temporal understanding, allowing it to better adapt to object variations and facilitating contextual integration. A key strength of our approach is its flexibility: it can be seamlessly integrated into various segmentation methods and consistently delivers performance improvements. Across prominent VIS benchmarks, our method consistently outperforms state-of-the-art methods. For instance, with a ResNet-50 backbone, our approach achieves a 4.0\% increase in average precision (AP) on the most challenging VIS benchmark, OVIS, setting a new record. The code will be made available soon.
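To make the idea of consistent pseudo-video augmentation concrete, the following is a minimal sketch, not the authors' released code: a single annotated image is turned into a short clip by applying a smoothly varying affine transform, and the same transform parameters are applied to every instance mask so instance identities remain consistent across frames. The frame count, jitter ranges, function name, and tensor layout are all assumptions for illustration.

```python
# Sketch of consistent pseudo-video augmentation (assumed implementation).
import torch
import torchvision.transforms.functional as TF
from torchvision.transforms import InterpolationMode


def make_pseudo_video(image, masks, num_frames=5, max_shift=0.05, max_angle=5.0):
    """image: (C, H, W) float tensor; masks: (N, H, W) float tensor of {0, 1}."""
    _, h, w = image.shape
    frames, frame_masks = [], []
    # Sample one target transform, then interpolate towards it so motion is smooth.
    end_angle = (torch.rand(1).item() * 2 - 1) * max_angle
    end_shift = ((torch.rand(2) * 2 - 1) * max_shift * torch.tensor([w, h], dtype=torch.float)).tolist()
    for t in range(num_frames):
        a = t / max(num_frames - 1, 1)  # interpolation factor in [0, 1]
        angle = a * end_angle
        translate = [int(a * end_shift[0]), int(a * end_shift[1])]
        # Identical parameters for image and masks -> consistent instance labels.
        frames.append(TF.affine(image, angle=angle, translate=translate,
                                scale=1.0, shear=[0.0],
                                interpolation=InterpolationMode.BILINEAR))
        frame_masks.append(TF.affine(masks, angle=angle, translate=translate,
                                     scale=1.0, shear=[0.0],
                                     interpolation=InterpolationMode.NEAREST))
    # Returns a (T, C, H, W) pseudo-clip and (T, N, H, W) per-frame masks
    # whose instance indices are aligned across all frames.
    return torch.stack(frames), torch.stack(frame_masks)
```

Because the image and its masks share one transform trajectory, an image-level instance segmentation dataset can serve as pseudo-video supervision during pre-training without any extra annotation.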
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3165