Abstract: This paper focuses on the problem of transplanting category- and task-specific neural networks into a generic, modular network without strong supervision. Unlike traditional deep neural networks (DNNs) with black-box representations, we design a functionally modular network architecture, which divides the entire DNN into several functionally meaningful modules. Like assembling LEGO blocks, we can teach the proposed DNN a new object category by directly transplanting the module corresponding to that category from another DNN, with few or even no sample annotations. Our method incrementally adds new categories to the DNN without affecting the representations of existing categories. Such a strategy of incremental network transplanting avoids the catastrophic-forgetting problem typical of continual learning. We further develop a back-distillation method to overcome the challenges of model optimization in network transplanting. In experiments, our method outperformed baselines while using far fewer training samples.