Abstract: Despite recent breakthroughs in the applications of deep neural networks, training deep models remains challenging when only limited training data are available. The architecture required for a new task may differ from that of previously trained models, and there may be insufficient labeled or unlabeled data to train or adapt a deep architecture to the new task. In view of this, we propose an auxiliary structure for deep model learning with insufficient data, which uses additional alignment layers to migrate the weights of an auxiliary model to the new model. Experimental results demonstrate that the auxiliary structure reduces overfitting and improves accuracy using only a few training samples.
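To make the idea concrete, the sketch below shows one plausible reading of the abstract, assuming a frozen pretrained auxiliary network, a small new task network, and a linear alignment layer that maps auxiliary features into the new model's feature space as a regularizer. All class names, layer sizes, and the alignment loss are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of an "auxiliary structure" with an alignment layer.
# Assumed: a frozen pretrained auxiliary feature extractor, a new task network,
# and an MSE alignment loss that transfers auxiliary knowledge during few-shot training.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AuxiliaryAlignedModel(nn.Module):
    def __init__(self, aux_model: nn.Module, aux_dim: int, new_dim: int,
                 in_dim: int, num_classes: int):
        super().__init__()
        self.aux_model = aux_model                # pretrained auxiliary model (frozen)
        for p in self.aux_model.parameters():
            p.requires_grad = False
        self.new_model = nn.Sequential(           # new task-specific feature extractor
            nn.Linear(in_dim, new_dim), nn.ReLU(),
            nn.Linear(new_dim, new_dim), nn.ReLU(),
        )
        self.align = nn.Linear(aux_dim, new_dim)  # alignment layer: maps auxiliary
                                                  # features into the new feature space
        self.classifier = nn.Linear(new_dim, num_classes)

    def forward(self, x):
        new_feat = self.new_model(x)
        with torch.no_grad():
            aux_feat = self.aux_model(x)
        aligned = self.align(aux_feat)
        # The alignment loss pulls new features toward the aligned auxiliary features,
        # acting as a regularizer when only a few labeled samples are available.
        align_loss = F.mse_loss(new_feat, aligned)
        logits = self.classifier(new_feat)
        return logits, align_loss


# Illustrative training objective: cross-entropy plus a weighted alignment term,
# e.g. loss = F.cross_entropy(logits, labels) + lam * align_loss
```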