Abstract: Conventional deep learning classifiers are static in the sense that they are trained on
a predefined set of classes, and learning to classify a novel class typically requires
re-training. In this work, we address the problem of low-shot network-expansion
learning. We introduce a learning framework that enables expanding a pre-trained
(base) deep network to classify novel classes when the number of examples for the
novel classes is particularly small. We present a simple yet powerful distillation
method where the base network is augmented with additional weights to classify
the novel classes, while keeping the weights of the base network unchanged. We
term this learning hard distillation, since the responses of the expanded network on
the base classes are constrained to be identical to those of the base network. We
show that, since only a small number of weights need to be trained, hard
distillation excels in low-shot training scenarios. Furthermore, hard distillation
avoids any degradation of classification performance on the base classes. Finally, we
show that low-shot network expansion can be done with a very small memory
footprint by using a compact generative model of the base classes' training data,
with only negligible degradation relative to learning with the full training set.
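
As an illustration of the expansion mechanism described in the abstract, the sketch below shows one way hard distillation can be realized in PyTorch: the base feature extractor and base classification weights are frozen, and only a small set of new output weights for the novel classes is trained. The class and variable names here are illustrative assumptions, not the authors' code.

```python
# A minimal PyTorch sketch of hard-distillation-style network expansion,
# assuming the expansion amounts to adding new output weights for the novel
# classes on top of a frozen base network (illustrative, not the authors' code).
import torch
import torch.nn as nn


class ExpandedClassifier(nn.Module):
    """Wraps a frozen base classifier and adds weights for novel classes.

    Because the base feature extractor and base classification weights are
    frozen, the logits for the base classes are identical in the base and
    expanded networks ("hard distillation").
    """

    def __init__(self, base_features: nn.Module, base_head: nn.Linear, num_novel: int):
        super().__init__()
        self.features = base_features
        self.base_head = base_head
        # Only these weights are trained for the novel classes.
        self.novel_head = nn.Linear(base_head.in_features, num_novel)

        # Freeze every base parameter so base-class responses are preserved.
        for p in self.features.parameters():
            p.requires_grad = False
        for p in self.base_head.parameters():
            p.requires_grad = False

    def forward(self, x):
        z = self.features(x)
        # Base logits are unchanged; novel-class logits are appended.
        return torch.cat([self.base_head(z), self.novel_head(z)], dim=1)


# Usage: only the novel-class weights are passed to the optimizer.
base_features = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
base_head = nn.Linear(256, 10)            # pre-trained 10-way base classifier
model = ExpandedClassifier(base_features, base_head, num_novel=2)
optimizer = torch.optim.SGD(model.novel_head.parameters(), lr=0.01)
```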
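The small-memory-footprint claim rests on replacing the stored base training set with a compact generative model. The sketch below assumes one simple instantiation: a small Gaussian mixture per base class fitted over the frozen network's feature embeddings, from which pseudo-features are sampled while training the expanded head. The choice of a per-class GMM and the function names are assumptions for illustration only.

```python
# A sketch of the compact generative memory: instead of keeping the full base
# training set, keep one small GMM per base class over feature embeddings and
# sample pseudo-features when training the novel-class weights.
import numpy as np
from sklearn.mixture import GaussianMixture


def fit_class_generators(features_by_class, n_components=3):
    """Fit one small GMM per base class on its feature vectors."""
    generators = {}
    for label, feats in features_by_class.items():
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
        gmm.fit(feats)                      # feats: (num_examples, feat_dim)
        generators[label] = gmm
    return generators


def sample_base_features(generators, n_per_class=64):
    """Draw pseudo-features standing in for the discarded base training data."""
    xs, ys = [], []
    for label, gmm in generators.items():
        samples, _ = gmm.sample(n_per_class)
        xs.append(samples)
        ys.append(np.full(n_per_class, label))
    return np.concatenate(xs), np.concatenate(ys)


# Usage: mix sampled base-class features with the few real novel-class
# examples when training the expanded head (random features used as stand-ins).
rng = np.random.default_rng(0)
features_by_class = {c: rng.normal(size=(200, 256)) for c in range(10)}
gens = fit_class_generators(features_by_class)
pseudo_x, pseudo_y = sample_base_features(gens, n_per_class=32)
```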
TL;DR: In this paper, we address the problem of low-shot network-expansion learning
Keywords: Low-Shot Learning, Class Incremental Learning, Network Expansion, Generative Model, Distillation
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/generative-low-shot-network-expansion/code)