CT++: Complementary Co-Training for Semi-Supervised Semantic Segmentation

24 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: semi-supervised learning, semantic segmentation
TL;DR: We propose a complementary co-training framework for semi-supervised semantic segmentation that enlarges the discrepancy between peer models
Abstract: With limited annotations, semi-supervised semantic segmentation aims to improve segmentation by exploiting abundant unlabeled images. Among recent trends, co-training is gaining increasing popularity, where two parallel models produce pseudo labels for each other. The success of co-training heavily relies on the discrepancy between the peer models. To achieve this, prior works mostly rely on different initializations of the decoders. Unfortunately, the two models still quickly converge to a tightly coupled state, degrading co-training into inferior self-training. To address this dilemma, we present CT++, which decouples the two co-training models from two novel perspectives. First, we construct complementary feature-level views: the two co-training models decode from disjoint and complementary sets of features. Beyond complementary features, we further construct complementary input views for the two models to learn from. These two complementary principles enlarge the model discrepancy significantly, enabling the co-training models to transfer distinct knowledge to each other, broadening their capability and markedly boosting co-training effectiveness. Extensive studies on Pascal, Cityscapes, COCO, and ADE20K exhibit the strong superiority of our method, e.g., 80.2% mIoU with only 92 labels on Pascal.
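To make the co-training mechanism in the abstract concrete, below is a minimal PyTorch sketch of one unlabeled-data training step, assuming a shared encoder and two decoders. The encoder/decoder interfaces, the channel-split rule for the "disjoint and complementary sets of features", and the confidence threshold are illustrative assumptions; the abstract does not specify CT++'s exact architecture or losses.

```python
# Hedged sketch of complementary co-training on unlabeled images.
# All names (encoder, dec_a, dec_b) and the channel-split scheme are
# hypothetical illustrations, not the authors' implementation.
import torch
import torch.nn.functional as F

@torch.no_grad()
def make_pseudo_labels(logits, conf_thresh=0.95):
    """Hard pseudo labels with a confidence mask (a common co-training recipe)."""
    probs = logits.softmax(dim=1)
    conf, labels = probs.max(dim=1)          # (B, H, W) each
    return labels, (conf >= conf_thresh).float()

def co_training_step(encoder, dec_a, dec_b, x_u_a, x_u_b, optimizer):
    """One step: each decoder is supervised by its peer's pseudo labels.

    x_u_a / x_u_b are two complementary input views of the same unlabeled
    batch (e.g. different augmentations), per the abstract's second principle.
    """
    feats_a = encoder(x_u_a)                 # (B, C, H, W) feature map
    feats_b = encoder(x_u_b)
    c = feats_a.shape[1] // 2
    # Complementary feature-level views: each decoder sees a disjoint half
    # of the channels -- one possible reading of "disjoint and complementary
    # sets of features".
    logits_a = dec_a(feats_a[:, :c])
    logits_b = dec_b(feats_b[:, c:])

    pl_b, mask_b = make_pseudo_labels(logits_b)   # B supervises A
    pl_a, mask_a = make_pseudo_labels(logits_a)   # A supervises B

    loss_a = (F.cross_entropy(logits_a, pl_b, reduction="none") * mask_b).mean()
    loss_b = (F.cross_entropy(logits_b, pl_a, reduction="none") * mask_a).mean()
    loss = loss_a + loss_b

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the discrepancy between the two branches comes from both the disjoint feature channels and the distinct input views, rather than from decoder initialization alone, which mirrors the decoupling argument in the abstract.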
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8819