Few-Shot Learning with Simplex

15 Feb 2018 (modified: 05 May 2023) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: Deep learning has achieved remarkable results in many fields. However, learning the parameters of neural networks usually demands a large amount of labeled data. Deep learning algorithms therefore struggle in supervised settings where only a small amount of labeled data is available, a task known as few-shot learning. To address it, we propose a novel few-shot learning algorithm based on discrete geometry: the samples of a class are modeled as a reduced simplex, and the volume of the simplex serves as a measure of class scatter. At test time, the test sample and the points of the class together form a new simplex, and the similarity between the test sample and the class is quantified by the ratio of the volume of the new simplex to that of the original class simplex. Moreover, we present an approach to constructing simplices from local regions of feature maps produced by convolutional neural networks. Experiments on Omniglot and miniImageNet verify the effectiveness of our simplex algorithm for few-shot learning.
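The volume-ratio rule described in the abstract can be illustrated with a short sketch. The code below is a minimal illustration under stated assumptions, not the authors' implementation: it computes simplex volumes from the Gram determinant of edge vectors and assigns a query feature to the class whose simplex grows the least when the query point is appended. The function names, the `eps` stabilizer, and the use of raw volume ratios (rather than the paper's exact scoring or its feature-map construction) are assumptions made for illustration.

```python
import numpy as np
from math import factorial


def simplex_volume(vertices):
    """Volume of the simplex spanned by the rows of `vertices` (shape (k+1, d)),
    via the Gram determinant of the edge vectors:
        V = sqrt(det(A A^T)) / k!,  with A = [x_1 - x_0, ..., x_k - x_0].
    """
    edges = vertices[1:] - vertices[0]          # k x d edge vectors from the first vertex
    gram = edges @ edges.T                      # k x k Gram matrix
    k = edges.shape[0]
    return np.sqrt(max(np.linalg.det(gram), 0.0)) / factorial(k)


def classify_by_volume_ratio(query, class_features, eps=1e-12):
    """Assign `query` (shape (d,)) to the class whose simplex expands the least
    when the query point is added, i.e. the smallest augmented-to-original
    volume ratio. `class_features` maps label -> array of shape (n_shot, d)."""
    best_label, best_ratio = None, np.inf
    for label, feats in class_features.items():
        base = simplex_volume(feats)
        augmented = simplex_volume(np.vstack([feats, query]))
        ratio = augmented / (base + eps)        # eps is a hypothetical stabilizer
        if ratio < best_ratio:
            best_label, best_ratio = label, ratio
    return best_label


# Toy usage on random features (5-way 5-shot); real features would come from a CNN.
rng = np.random.default_rng(0)
support = {c: rng.normal(size=(5, 64)) for c in range(5)}
query = rng.normal(size=64)
print(classify_by_volume_ratio(query, support))
```

Intuitively, the augmented volume is proportional to the class-simplex volume times the query's height above the affine hull of the class samples, so a small ratio means the query lies close to the subspace spanned by that class.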
TL;DR: A simplex-based geometric method is proposed for few-shot learning problems.
Keywords: One-shot learning, few-shot learning, deep learning, simplex
Withdrawal: Confirmed
7 Replies