Abstract: Novel category discovery (NCD) is a challenging and emerging task that aims to cluster unlabelled instances by transferring knowledge from labelled ones. Most recent state-of-the-art methods leverage contrastive learning to model labelled and unlabelled data simultaneously. Nevertheless, they suffer from inaccurate and insufficient positive samples, which are detrimental to NCD and even to its generalized category discovery (GCD) setting. To address this problem, we propose positive-augmented contrastive learning (PACL), which mines more positive samples as well as additional pseudo-positive samples and increases the loss contribution of these positive pairs. Consequently, PACL alleviates the imbalance between positive and negative pairs in contrastive learning and facilitates knowledge transfer for novel category discovery. In addition, we develop a general feature rectification approach based on PACL to rectify the representation learning of existing NCD or GCD models. Extensive experiments on three datasets demonstrate the effectiveness and generality of our approach on both NCD and GCD tasks.
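To make the idea concrete, the sketch below illustrates one way a contrastive loss can be "positive-augmented": each anchor's positive set is enlarged with mined pseudo-positive pairs, and the loss contribution of positive pairs is up-weighted. This is only an illustrative sketch under assumed interfaces; the function name, the mask construction, and the scalar `pos_weight` are assumptions, not the paper's exact PACL objective.

```python
import torch


def positive_augmented_contrastive_loss(
    features: torch.Tensor,         # (N, D) L2-normalised embeddings
    pos_mask: torch.Tensor,         # (N, N) bool, known positive pairs
    pseudo_pos_mask: torch.Tensor,  # (N, N) bool, mined pseudo-positive pairs
    temperature: float = 0.1,
    pos_weight: float = 2.0,        # > 1 scales up the cost of every positive pair
) -> torch.Tensor:
    """Supervised-contrastive-style loss over an enlarged, re-weighted positive set.

    Illustrative only: the pseudo-positive mining and the exact weighting scheme
    of PACL are not reproduced here.
    """
    n = features.size(0)
    sim = features @ features.t() / temperature            # pairwise similarities
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, -1e9)                 # exclude self-comparisons

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)  # row-wise log-softmax

    # Augmented positive set: known positives plus mined pseudo-positives.
    all_pos = (pos_mask | pseudo_pos_mask) & ~self_mask

    # Up-weighted average of positive log-probabilities per anchor, so each
    # positive pair costs more than in a standard contrastive loss.
    weights = all_pos.float() * pos_weight
    pos_count = all_pos.float().sum(dim=1).clamp(min=1.0)
    loss_per_anchor = -(weights * log_prob).sum(dim=1) / pos_count
    return loss_per_anchor.mean()
```

In practice, `pos_mask` could be built from ground-truth labels on the labelled split (e.g. `labels[:, None] == labels[None, :]`), while `pseudo_pos_mask` could come from nearest-neighbour mining or clustering assignments on the unlabelled data; both choices are assumptions for illustration.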