TL;DR: We quantitatively study out-of-distribution detection in the few-shot setting, establish baseline results with ProtoNet, MAML, and ABML, and improve upon them.
Abstract: In many real-world settings, a learning model must perform few-shot classification: learn to classify examples from unseen classes using only a few labeled examples per class.
Additionally, to be safely deployed, it should have the ability to detect out-of-distribution inputs: examples that do not belong to any of the classes.
While both few-shot classification and out-of-distribution detection are popular topics, their combination has not been studied. In this work, we propose tasks for out-of-distribution detection in the few-shot setting and establish benchmark datasets based on four popular few-shot classification datasets. We then propose two new methods for this task and investigate their performance.
In sum, we establish baseline out-of-distribution detection results using standard metrics on new benchmark datasets and show improved results with our proposed methods.
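The abstract does not spell out how an out-of-distribution score is computed in the few-shot setting, but a natural baseline for a ProtoNet-style model is to score each query by its distance to the nearest class prototype and evaluate with AUROC, one of the standard metrics mentioned above. The sketch below is an illustrative assumption, not the paper's proposed method; all function names and the synthetic data are hypothetical.

```python
import numpy as np

def prototypes(support_x, support_y, n_classes):
    # ProtoNet-style class prototypes: mean embedding per class
    return np.stack([support_x[support_y == c].mean(axis=0)
                     for c in range(n_classes)])

def ood_score(query_x, protos):
    # Score = Euclidean distance to the nearest prototype;
    # larger scores suggest the query is out-of-distribution.
    d = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
    return d.min(axis=1)

def auroc(scores_in, scores_out):
    # Rank-based AUROC: probability a random OOD example
    # receives a higher score than a random in-distribution one.
    s = np.concatenate([scores_in, scores_out])
    labels = np.concatenate([np.zeros(len(scores_in)),
                             np.ones(len(scores_out))])
    order = np.argsort(s)
    ranks = np.empty(len(s))
    ranks[order] = np.arange(1, len(s) + 1)
    n_out = int(labels.sum())
    n_in = len(s) - n_out
    return (ranks[labels == 1].sum() - n_out * (n_out + 1) / 2) / (n_in * n_out)
```

Usage on toy 2-D "embeddings": build prototypes from a 2-way, 5-shot support set, then score in-distribution queries (near a prototype) and far-away OOD queries; well-separated clusters yield an AUROC near 1.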
Keywords: few-shot classification, out-of-distribution detection, uncertainty estimation
Code: https://drive.google.com/open?id=1LU1B6pK19AvZeXtjSzCGAqXfuYtUA442