Sample-specific and Context-aware Augmentation for Long Tail Image Classification

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: Long-tail image classification, Semantic augmentation
Abstract: Recent long-tail classification methods generally adopt a two-stage pipeline and, in the second stage, focus on learning the classifier to handle imbalanced data via re-sampling or re-weighting; however, the classifier remains prone to overconfidence on head classes. Data augmentation is a natural way to address this issue. Existing augmentation methods either perform low-level transformations or apply the same semantic transformation to all samples, yet meaningful augmentations should differ from sample to sample. In this paper, we propose a novel sample-specific and context-aware augmentation learning method for long-tail image classification. We model the semantic within-class transformation range of each sample with a sample-specific Gaussian distribution and design a semantic transformation generator (STG) to predict this distribution from the sample itself. To encode context information accurately, STG is equipped with a memory-based structure. We train STG by constructing ground-truth distributions for head-class samples in the feature space, and apply it to tail-class samples for augmentation in the classifier-tuning stage. Extensive experiments on four imbalanced datasets demonstrate the effectiveness of our method.
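
The abstract only sketches the mechanism, so the following is a minimal illustrative sketch (not the authors' code) of the core idea: a generator predicts a per-sample Gaussian over semantic offsets in feature space, and tail-class features are augmented by sampling from it during classifier tuning. The class name SemanticTransformationGenerator, the attention-over-memory design, the diagonal-Gaussian parameterization, and all hyperparameters are assumptions made for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn


class SemanticTransformationGenerator(nn.Module):
    """Predicts a diagonal Gaussian N(mu, sigma^2) over semantic offsets
    for each input feature, conditioned on a small learned memory bank."""

    def __init__(self, feature_dim: int, memory_slots: int = 16):
        super().__init__()
        # Learned memory that provides context shared across samples (assumed design).
        self.memory = nn.Parameter(torch.randn(memory_slots, feature_dim))
        self.to_mu = nn.Linear(2 * feature_dim, feature_dim)
        self.to_logvar = nn.Linear(2 * feature_dim, feature_dim)

    def forward(self, feats: torch.Tensor):
        # Attend over memory slots to build a context vector per sample.
        attn = torch.softmax(feats @ self.memory.t(), dim=-1)   # (B, M)
        context = attn @ self.memory                             # (B, D)
        h = torch.cat([feats, context], dim=-1)                  # (B, 2D)
        return self.to_mu(h), self.to_logvar(h)

    def augment(self, feats: torch.Tensor, n: int = 1):
        # Draw n augmented copies of each feature via reparameterized sampling.
        mu, logvar = self.forward(feats)
        std = (0.5 * logvar).exp()
        eps = torch.randn(n, *feats.shape)
        return feats.unsqueeze(0) + mu.unsqueeze(0) + eps * std.unsqueeze(0)


# Example usage: augment tail-class features in the classifier-tuning stage.
stg = SemanticTransformationGenerator(feature_dim=64)
tail_feats = torch.randn(8, 64)           # backbone features of tail samples
aug_feats = stg.augment(tail_feats, n=4)  # (4, 8, 64) augmented features
```

In this sketch the generator would be trained against ground-truth distributions built from head-class features, as the abstract describes; that training objective is omitted here.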
