Abstract: Image classification in real-world applications is challenging due to the scarcity of labeled data. Many few-shot learning techniques have been developed to address this problem. However, existing few-shot learning techniques fail when new classes without any labeled data (out-of-distribution) are added to the model: they classify samples from the new classes as one of the existing classes and cannot detect them as new. Moreover, because existing techniques rely on supervised learning, they cannot operate when all data are unlabeled. This article proposes a novel few-shot learning network, the knowledge transfer network (KTNet), that learns from unlabeled data by assigning pseudolabels to them. These pseudolabeled data are either added to the existing labeled data (in-distribution) to increase the number of shots or added as new classes if the data are out-of-distribution. The proposed KTNet works in all cases: when a small amount of labeled data exists for all classes, when labeled data exist for only a subset of the classes, and when all data are unlabeled. KTNet is evaluated on two benchmark datasets (mini-ImageNet and fewshot-CIFAR). The results show that the proposed network outperforms state-of-the-art models on both datasets in classification accuracy, and that KTNet is better than existing techniques at detecting and clustering out-of-distribution classes.