Self-supervised class-balanced active learning with uncertainty-mastery fusion

Published: 01 Jan 2024 · Last Modified: 15 May 2025 · Knowl. Based Syst. 2024 · CC BY-SA 4.0
Abstract: Deep active learning (DeepAL) offers a viable solution for enhancing the predictive performance of models under limited annotation budgets. Popular DeepAL pipelines fail to (1) adequately and simultaneously exploit unlabeled and labeled samples to learn accurate data representations, and (2) progressively balance the distribution of selected samples across classes during querying. In this paper, we introduce a self-supervised DeepAL (SSAL) framework to address these challenges. First, SSAL uses both labeled and unlabeled data to learn high-quality representations: it learns to predict the rotation degree of unlabeled images in a self-supervised manner, and this objective is trained jointly with the supervised one. Second, sample informativeness is measured by fusing the discriminant uncertainty with the presumed mastery of each sample; this fusion avoids both the selection of many non-representative boundary samples (as happens when relying on uncertainty alone) and the selection of aggregated samples (as happens when relying on clustering alone). Third, we propose a new sample selection strategy, class-rebalanced group sampling, which rebalances classes through the optimal number of query samples per class, derived from approximating the population distribution with the model distribution. Experiments on five public image datasets, namely CIFAR-10/100, FashionMNIST, SVHN, and TinyImageNet, demonstrate the effectiveness of our approach. The code is available at https://github.com/FanSmale/SSAL.
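The sketch below is a minimal, hypothetical illustration (not the authors' implementation; see the linked repository for that) of two ideas the abstract describes: jointly training a supervised classification loss with a rotation-prediction self-supervised loss on unlabeled data, and splitting the query budget across classes so that the labeled pool moves toward a balanced distribution. Names such as `model.backbone`, `model.rotation_head`, `ssl_weight`, and `rebalanced_budget` are assumptions introduced here for illustration; the paper's exact uncertainty-mastery fusion and per-class sizing rule may differ.

```python
# Illustrative sketch only; all helper names are hypothetical, not from the SSAL codebase.
import torch
import torch.nn.functional as F


def joint_loss(model, x_labeled, y_labeled, x_unlabeled, ssl_weight=1.0):
    """Supervised cross-entropy on labeled images plus 4-way rotation prediction on unlabeled ones."""
    sup_loss = F.cross_entropy(model(x_labeled), y_labeled)

    # Self-supervised task: predict the rotation applied (0, 90, 180, 270 degrees).
    rotations = [torch.rot90(x_unlabeled, k, dims=(2, 3)) for k in range(4)]
    x_rot = torch.cat(rotations, dim=0)
    y_rot = torch.arange(4).repeat_interleave(x_unlabeled.size(0)).to(x_unlabeled.device)

    # Hypothetical auxiliary head on top of the shared backbone.
    rot_logits = model.rotation_head(model.backbone(x_rot))
    ssl_loss = F.cross_entropy(rot_logits, y_rot)
    return sup_loss + ssl_weight * ssl_loss


def rebalanced_budget(pred_classes, labeled_counts, budget):
    """Split the query budget per class so the labeled set approaches a balanced size.

    pred_classes: predicted class index of each unlabeled sample (LongTensor)
    labeled_counts: current number of labeled samples per class (LongTensor, length C)
    budget: total number of samples to query in this round
    """
    num_classes = labeled_counts.numel()
    target = (labeled_counts.sum().float() + budget) / num_classes   # ideal per-class size
    deficit = (target - labeled_counts.float()).clamp(min=0.0)       # classes below target
    if deficit.sum() == 0:                                           # already balanced: spread uniformly
        deficit = torch.ones(num_classes)
    alloc = (deficit / deficit.sum() * budget).long()
    # Cap each class by the number of unlabeled samples actually predicted for it.
    available = torch.bincount(pred_classes, minlength=num_classes)
    return torch.minimum(alloc, available)
```

In such a setup, each class's allocation would then be filled by ranking that class's unlabeled samples with the fused uncertainty-mastery score before sending them for annotation.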