Deep Duplex Learning for Weak Supervision

22 Sept 2022 (modified: 13 Feb 2023) | ICLR 2023 Conference Withdrawn Submission
Keywords: Weakly supervised learning, learning with noisy labels, partial label learning, semi-supervised learning.
TL;DR: We propose a deep duplex learning method for general weakly-supervised learning.
Abstract: Weak supervision widely exists in practice and takes various forms, such as noisy labels, partial labels, or pseudo labels. Because a weak supervisor may provide false training signals, most existing works focus on correcting the supervisor or ignoring certain constraints. Whereas these methods tackle each type of weak supervision separately, we propose a deep duplex learning (DDL) method that handles all of them from a unified perspective of supervision utilization. We exploit both the supervision and counter-supervision signals for training and allow the network to implicitly and adaptively balance the two. We describe each image with a duplex representation composed of a superficial representation (SR) and a hypocritical representation (HR), and impose the supervision signal on the SR and the counter-supervision signal on the HR. The SR and HR collaborate to interact with the weak supervisor and adaptively confine the effect of false supervision on the network. Our DDL sets new state-of-the-art results for noisy label learning, partial label learning, and semi-supervised learning on standard benchmarks.
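The abstract does not give the concrete objective, so the following is only one plausible reading of the duplex idea, not the authors' formulation: the superficial representation (SR) is trained to follow the weak label via cross-entropy, while the hypocritical representation (HR) receives counter-supervision, sketched here as a negative-learning term that pushes probability mass away from that label. All function names and the mixing weight `alpha` are illustrative assumptions.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def duplex_loss(sr_logits, hr_logits, label, alpha=0.5):
    """Illustrative duplex objective (an assumption, not the paper's exact loss).

    - Supervision on the SR head: standard cross-entropy toward the
      (possibly false) weak label.
    - Counter-supervision on the HR head: a negative-learning style term,
      -log(1 - p[label]), penalizing confidence in that same label.
    The network can then trade the two terms off when the label is wrong.
    """
    p_sr = softmax(sr_logits)
    p_hr = softmax(hr_logits)
    ce_sr = -math.log(p_sr[label] + 1e-12)            # follow the weak label
    counter_hr = -math.log(1.0 - p_hr[label] + 1e-12) # push away from it
    return alpha * ce_sr + (1 - alpha) * counter_hr
```

Under this reading, an SR head that agrees with the weak label lowers the loss, while an HR head that also commits to that label raises it, so the two heads jointly encode both trust and distrust of the supervisor.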
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Supplementary Material: zip