We're Not Using Videos Effectively: An Updated Domain Adaptive Video Segmentation Baseline

Published: 31 Jan 2024 · Last Modified: 31 Jan 2024 · Accepted by TMLR
Abstract: There has been abundant work in unsupervised domain adaptation for semantic segmentation (DAS), seeking to adapt a model trained on images from a labeled source domain to an unlabeled target domain. While the vast majority of prior work has studied this as a frame-level Image-DAS problem, a few Video-DAS works have sought to additionally leverage the temporal signal present in adjacent frames. However, Video-DAS works have historically studied a distinct set of benchmarks from Image-DAS, with minimal cross-benchmarking. In this work, we address this gap. Surprisingly, we find that (1) even after carefully controlling for data and model architecture, state-of-the-art Image-DAS methods (HRDA and HRDA+MIC) outperform Video-DAS methods on established Video-DAS benchmarks (+14.5 mIoU on Viper$\rightarrow$CityscapesSeq, +19.0 mIoU on Synthia$\rightarrow$CityscapesSeq), and (2) naive combinations of Image-DAS and Video-DAS techniques lead to only marginal improvements across datasets. To avoid siloed progress between Image-DAS and Video-DAS, we open-source our codebase with support for a comprehensive set of Video-DAS and Image-DAS methods on a common benchmark. Code is available at https://github.com/SimarKareer/UnifiedVideoDA
Certifications: Reproducibility Certification
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/SimarKareer/UnifiedVideoDA
Assigned Action Editor: ~Evan_G_Shelhamer1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1611