Adaptive Knowledge Transfer for Generalized Category Discovery

22 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: general category discovery, novel class discovery, knowledge transfer
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: We tackle the generalized category discovery problem, which aims to discover novel classes in unlabeled datasets by leveraging information from known classes. Most previous works transfer knowledge implicitly from known classes to novel ones through a shared representation space. However, the implicit nature of this knowledge transfer makes it difficult to control the flow of information between known and novel classes, and it is susceptible to label uncertainty when learning from unlabeled data. To overcome these limitations, we introduce an explicit and adaptive knowledge transfer framework that facilitates novel class discovery. The framework comprises three primary steps. The first step obtains representations of known-class knowledge from a pre-trained known-class model. The second step transforms this knowledge representation to enable more targeted transfer, realized through an adapter layer and a channel selection matrix. The final step is knowledge distillation, where we maximize the mutual information between the two representation spaces. Furthermore, we introduce a challenging benchmark, iNat21, comprising three distinct difficulty levels. We conduct extensive experiments on various benchmark datasets, and the results demonstrate the superiority of our approach over previous state-of-the-art methods.
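The three steps of the abstract can be sketched in miniature. This is an illustrative toy, not the authors' implementation: the dimensions, the linear adapter, the binary channel gate, and the use of an InfoNCE lower bound as the mutual-information surrogate are all assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen for illustration only
n, d_known, d_novel = 8, 16, 16

# Step 1: representations from a frozen pre-trained known-class model
# (simulated here with random features)
z_known = rng.standard_normal((n, d_known))

# Step 2: transform the knowledge -- an adapter layer (a linear map in this
# sketch) followed by a diagonal channel-selection matrix that gates which
# channels are allowed to transfer
W_adapter = 0.1 * rng.standard_normal((d_known, d_novel))
gate = (rng.random(d_novel) > 0.5).astype(float)   # binary channel selection
z_transferred = (z_known @ W_adapter) * gate       # only selected channels pass

# Step 3: distillation by maximizing mutual information between the transferred
# representation and the discovery model's representation. A common tractable
# surrogate is the InfoNCE lower bound; the paper may use a different estimator.
z_novel = z_transferred + 0.05 * rng.standard_normal((n, d_novel))

def infonce_mi_lower_bound(a, b, temperature=0.1):
    """InfoNCE estimate of MI: matched rows (a_i, b_i) are positive pairs,
    mismatched rows serve as negatives. Bounded above by log(batch size)."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = (a @ b.T) / temperature
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(np.mean(np.diag(log_probs)) + np.log(len(a)))

mi_est = infonce_mi_lower_bound(z_transferred, z_novel)
print(mi_est)  # higher value = more shared information between the two spaces
```

In a real system the adapter and gate would be learned jointly with the discovery model, so that gradients from the MI objective decide which known-class channels are worth transferring.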
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5777