Abstract: Classification and Out-of-Distribution (OoD) detection in the few-shot setting remain challenging problems, but they are important for building critical systems in security, where samples are limited. OoD detection requires that classifiers know when they do not know, avoiding the assignment of high confidence to OoD samples that lie away from the training data distribution. To address these limitations, we propose the Few-shot ROBust (FROB) model, whose key contributions are (a) joint classification and few-shot OoD detection, (b) sample generation on the boundary of the support of the normal class distribution, and (c) incorporation of the learned distribution boundary as OoD data for contrastive negative training. FROB finds the boundary of the support of the normal class distribution and uses it to improve few-shot OoD detection performance. We propose a self-supervised learning methodology for generating samples on the confidence boundary of the normal class distribution, based on generative and discriminative models, including classification. FROB implicitly generates adversarial samples and forces the classifier to assign low confidence to OoD samples, including those on our learned boundary. By including the learned boundary, FROB reduces the threshold linked to the model’s few-shot robustness and maintains OoD detection performance approximately constant, independent of the number of few-shots. The low- and few-shot robustness evaluation of FROB on different image datasets and on One-Class Classification (OCC) data shows that FROB achieves competitive performance and outperforms baselines in robustness to the OoD few-shot population and variability.
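To make the contrastive negative-training objective described above concrete, the following is a minimal PyTorch-style sketch of the kind of loss the abstract implies: standard cross-entropy on normal-class samples, plus a term that pushes the classifier's predictive distribution toward uniform on samples generated on the boundary of the normal-class support. The function name `frob_style_loss` and the weight `lam` are our illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def frob_style_loss(logits_in: torch.Tensor,
                    labels_in: torch.Tensor,
                    logits_boundary: torch.Tensor,
                    lam: float = 0.5) -> torch.Tensor:
    """Illustrative joint objective (assumed form, not the official FROB code).

    logits_in:       classifier logits for in-distribution (normal) samples
    labels_in:       ground-truth labels for those in-distribution samples
    logits_boundary: logits for samples generated on the learned boundary
                     of the normal-class support, used as OoD negatives
    lam:             hypothetical weight of the negative-training term
    """
    # Supervised term: ordinary cross-entropy on normal-class samples.
    ce_in = F.cross_entropy(logits_in, labels_in)

    # Negative-training term: cross-entropy against the uniform distribution,
    # which (up to an additive constant) minimizes
    # KL(uniform || softmax(logits)) and so forces the classifier to be
    # less confident on boundary / OoD samples.
    log_probs = F.log_softmax(logits_boundary, dim=1)
    ce_uniform = -log_probs.mean(dim=1).mean()

    return ce_in + lam * ce_uniform
```

Driving the boundary samples toward a uniform output is the same mechanism used in outlier-exposure-style negative training; the distinction highlighted in the abstract is that the negatives are self-generated on the support boundary rather than drawn from an auxiliary OoD dataset.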