Unleashing the Potential of CNNs for Interpretable Few-Shot Learning

15 Feb 2018 (modified: 23 Jan 2023) · ICLR 2018 Conference Blind Submission
Abstract: Convolutional neural networks (CNNs) have been generally acknowledged as one of the driving forces for the advancement of computer vision. Despite their promising performance on many tasks, CNNs still face major obstacles on the road to achieving ideal machine intelligence. One is that CNNs are complex and hard to interpret. Another is that standard CNNs require large amounts of annotated data, which are sometimes very hard to obtain, so it is desirable to be able to learn from few examples. In this work, we address these limitations of CNNs by developing novel, simple, and interpretable models for few-shot learning. Our models are based on the idea of encoding objects in terms of visual concepts, which are interpretable visual cues represented by the feature vectors within CNNs. We first adapt the learning of visual concepts to the few-shot setting, and then uncover two key properties of feature encoding using visual concepts, which we call category sensitivity and spatial pattern. Motivated by these properties, we present two intuitive models for the problem of few-shot learning. Experiments show that our models achieve competitive performance, while being much more flexible and interpretable than alternative state-of-the-art few-shot learning methods. We conclude that using visual concepts helps expose the natural capability of CNNs for few-shot learning.
TL;DR: We enable ordinary CNNs to perform few-shot learning by exploiting visual concepts, which are interpretable visual cues learnt within CNNs.
Keywords: Few-Shot Learning, Neural Network Understanding, Visual Concepts
Data: [ImageNet](https://paperswithcode.com/dataset/imagenet), [PASCAL3D+](https://paperswithcode.com/dataset/pascal3d-2), [mini-Imagenet](https://paperswithcode.com/dataset/mini-imagenet)
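
To make the abstract's core idea concrete, below is a minimal illustrative sketch, not the authors' exact method: "visual concepts" are approximated by clustering pooled CNN feature vectors (the paper's specific clustering procedure and the category-sensitivity/spatial-pattern models are not reproduced here), each image is encoded by the visual concepts its feature vectors activate, and few-shot classification is done by nearest neighbor over these encodings. All function names and parameters (e.g. `build_visual_concepts`, `n_vcs`) are hypothetical, and random arrays stand in for real CNN feature maps.

```python
# Hedged sketch of encoding images via "visual concepts" (VCs) for few-shot
# classification. Assumption: VCs are cluster centers of CNN feature vectors;
# an image is summarized by which VCs its spatial feature vectors fall into.
import numpy as np
from sklearn.cluster import KMeans

def build_visual_concepts(feature_vectors, n_vcs=64, seed=0):
    """Cluster a bank of pooled CNN feature vectors (N, C) into VC centers."""
    km = KMeans(n_clusters=n_vcs, random_state=seed, n_init=10)
    km.fit(feature_vectors)
    return km.cluster_centers_              # shape (n_vcs, C)

def encode_image(feature_map, vc_centers):
    """Encode one image's feature map (H, W, C) as a normalized VC histogram."""
    h, w, c = feature_map.shape
    flat = feature_map.reshape(-1, c)       # one feature vector per position
    dists = np.linalg.norm(flat[:, None, :] - vc_centers[None, :, :], axis=-1)
    nearest = dists.argmin(axis=1)          # nearest VC index per position
    hist = np.bincount(nearest, minlength=len(vc_centers)).astype(float)
    return hist / hist.sum()

def few_shot_classify(query_enc, support_encs, support_labels):
    """1-nearest-neighbor over VC encodings of the few labeled support images."""
    dists = [np.linalg.norm(query_enc - s) for s in support_encs]
    return support_labels[int(np.argmin(dists))]

# Toy usage: random arrays stand in for real CNN activations.
rng = np.random.default_rng(0)
bank = rng.normal(size=(1000, 128))                       # pooled feature bank
vcs = build_visual_concepts(bank, n_vcs=16)
support_maps = [rng.normal(size=(7, 7, 128)) for _ in range(5)]
support_labels = ["cat", "dog", "car", "bike", "bird"]
support_encs = [encode_image(m, vcs) for m in support_maps]
query_enc = encode_image(rng.normal(size=(7, 7, 128)), vcs)
print(few_shot_classify(query_enc, support_encs, support_labels))
```

Because the VC encoding is just a histogram over interpretable cluster assignments, predictions can be inspected by looking at which visual concepts the query and its nearest support image share; this is the sense in which such an approach stays interpretable while reusing an ordinary CNN's features.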