OMG: Orthogonal Method of Grouping With Application of K-Shot Learning

Haoqi Fan, Yu Zhang, Kris M. Kitani

Nov 04, 2016 (modified: Dec 13, 2016) ICLR 2017 conference submission readers: everyone
  • Abstract: Training a classifier with only a few examples remains a significant barrier when using neural networks with a large number of parameters. Though various specialized network architectures have been proposed for these k-shot learning tasks to avoid overfitting, a question remains: is there a generalizable framework for the k-shot learning problem that can leverage existing deep models as well as avoid model overfitting? In this paper, we propose a generalizable k-shot learning framework that can be used on any pre-trained network, by grouping network parameters to produce a low-dimensional representation of the parameter space. The grouping of the parameters is based on an orthogonal decomposition of the parameter space. To avoid overfitting, groups of parameters are updated together during the k-shot training process. Furthermore, this framework can be integrated with any existing popular deep neural network, such as VGG, GoogLeNet, or ResNet, without any changes to the original network structure or any sacrifice in performance. We evaluate our framework on a wide range of intra-/inter-dataset k-shot learning tasks and show state-of-the-art performance.
  • Conflicts: u.northwestern.edu, baidu.com, gmail.com, northwestern.edu, google.com, eecs.berkeley.edu, cs.cmu.edu, cmu.edu, andrew.cmu.edu
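The core idea in the abstract — orthogonally decompose a pre-trained layer's parameter space, partition the resulting directions into groups, and update each group jointly so that k-shot fine-tuning touches only a handful of degrees of freedom — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the SVD-based decomposition, the number of groups, and the per-group scalar coefficients are all assumptions made here for illustration.

```python
import numpy as np

# Illustrative sketch: decompose a pretrained weight matrix orthogonally
# (here via SVD), partition the orthogonal directions into groups, and
# fine-tune one scalar coefficient per group instead of every weight.

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))   # stand-in for a pretrained layer's weights

# Orthogonal decomposition of the parameter space (SVD used for illustration).
U, s, Vt = np.linalg.svd(W, full_matrices=False)

n_groups = 4                        # hypothetical number of parameter groups
groups = np.array_split(np.arange(len(s)), n_groups)

def reconstruct(alpha):
    """Rebuild the layer weights, scaling each group of directions jointly.

    alpha has one entry per group, so k-shot training adjusts n_groups
    coefficients rather than all 64 * 32 individual weights.
    """
    s_scaled = s.copy()
    for g, idx in enumerate(groups):
        s_scaled[idx] *= alpha[g]
    return (U * s_scaled) @ Vt

# With all coefficients equal to 1, the pretrained weights are recovered.
assert np.allclose(reconstruct(np.ones(n_groups)), W)
```

Because the reconstruction is exact at `alpha = 1`, the pre-trained network is the starting point of fine-tuning, and the low-dimensional group coefficients are the only parameters the k-shot phase needs to update, which is the overfitting-avoidance mechanism the abstract describes.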