Keywords: few-shot learning, constellation models
Abstract: The success of deep convolutional neural networks builds on the learning of effective convolution operations, which capture a hierarchy of structured features via filtering, activation, and pooling. However, explicit structured features, e.g., object parts, are not well represented in existing CNN frameworks. In this paper, we tackle the few-shot learning problem and enhance structured features by expanding CNNs with a constellation model, which performs cell feature clustering and encoding with a dense part representation; the relationships among the cell features are further modeled by an attention mechanism. With the additional constellation branch increasing the awareness of object parts, our method retains the advantages of CNNs while making the overall internal representations more robust in the few-shot learning setting. Our approach attains a significant improvement over existing methods in few-shot learning on the CIFAR-FS, FC100, and mini-ImageNet benchmarks.
One-sentence Summary: We tackle the few-shot learning problem by introducing an explicit cell feature clustering procedure with relation learning via self-attention.
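The two components named in the summary, cell feature clustering with a dense soft-assignment encoding and relation modeling via self-attention, can be illustrated with a minimal NumPy sketch. This is a hedged illustration of the general idea, not the paper's implementation: the soft k-means-style assignment, the temperature `beta`, and the identity-projection attention are all simplifying assumptions.

```python
import numpy as np

def cell_clustering(cells, centers, beta=10.0):
    """Soft-assign each cell feature to K part prototypes (illustrative, not the paper's exact encoding).

    cells: (N, D) flattened feature-map cells; centers: (K, D) hypothetical part prototypes.
    Returns an (N, K) dense part encoding (rows sum to 1).
    """
    d = ((cells[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (N, K) squared distances
    logits = -beta * d
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=1, keepdims=True)

def self_attention(x):
    """Single-head self-attention over cell features with identity Q/K/V projections for brevity."""
    scores = x @ x.T / np.sqrt(x.shape[1])       # (N, N) scaled dot-product scores
    scores -= scores.max(axis=1, keepdims=True)
    a = np.exp(scores)
    a /= a.sum(axis=1, keepdims=True)            # row-wise softmax attention weights
    return a @ x                                 # relation-modeled cell features

rng = np.random.default_rng(0)
cells = rng.normal(size=(16, 8))       # e.g. a 4x4 feature map flattened into 16 cells of dim 8
centers = rng.normal(size=(4, 8))      # K=4 assumed part prototypes
enc = cell_clustering(cells, centers)  # (16, 4) dense part encoding
out = self_attention(cells)            # (16, 8) cell features after relation modeling
```

In the actual method, the clustering and attention outputs feed back into the CNN branch; here they are shown standalone only to make the two operations concrete.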
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Code: [![github](/images/github_icon.svg) mlpc-ucsd/ConstellationNet](https://github.com/mlpc-ucsd/ConstellationNet)
Data: [CIFAR-FS](https://paperswithcode.com/dataset/cifar-fs), [FC100](https://paperswithcode.com/dataset/fc100), [mini-ImageNet](https://paperswithcode.com/dataset/mini-imagenet)