Exploiting model capacity by constraining within-batch features to be orthogonal

Hyo-Eun Kim

Feb 10, 2018 · ICLR 2018 Workshop Submission
  • Abstract: Deep networks have been shown to benefit greatly from large model capacity when trained with recent deep learning techniques. At the same time, however, features in such large-capacity networks can be redundant. In this work, we propose a new regularization method that exploits the given network capacity effectively. By simultaneously minimizing the redundancy among in-layer filters and the correlation between in-batch features, we achieve better performance with the same network architecture. Experiments on CIFAR-10/100 show that simultaneously constraining the in-layer filters to be orthonormal and the in-batch features to be orthogonal helps utilize the model capacity efficiently (a sketch of both penalties follows this list).
  • Keywords: regularization
  • TL;DR: Reduce redundancy among features by concurrently constraining the correlation of in-layer filters and of in-batch features
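
The abstract names the two concurrent penalties but not their exact form. Below is a minimal sketch of how such constraints are commonly implemented, assuming squared Frobenius-norm penalties on the filter Gram matrix and on the batch-wise feature similarity matrix; the function names, the squared-error form, and the weighting are assumptions for illustration, not the paper's definitive formulation.

```python
import torch
import torch.nn.functional as F

def orthonormal_filter_penalty(weight: torch.Tensor) -> torch.Tensor:
    """Penalize in-layer filters for deviating from an orthonormal set.

    `weight` is a conv weight of shape (out_channels, in_channels, kH, kW);
    each filter is flattened to a row, and the Gram matrix W W^T is pushed
    toward the identity.
    """
    W = weight.reshape(weight.size(0), -1)
    gram = W @ W.t()
    eye = torch.eye(gram.size(0), device=W.device, dtype=W.dtype)
    return ((gram - eye) ** 2).sum()

def orthogonal_feature_penalty(features: torch.Tensor) -> torch.Tensor:
    """Penalize correlation between in-batch features.

    `features` has shape (batch, ...); each sample's feature map is
    flattened and L2-normalized, and off-diagonal entries of the
    batch-wise cosine-similarity matrix are driven toward zero.
    """
    f = features.reshape(features.size(0), -1)
    f = F.normalize(f, dim=1)
    sim = f @ f.t()
    off_diag = sim - torch.diag(torch.diag(sim))
    return (off_diag ** 2).sum()
```

In a training loop, both terms would be added to the task loss with their own coefficients, e.g. `loss = task_loss + lw * orthonormal_filter_penalty(conv.weight) + lf * orthogonal_feature_penalty(hidden)`, where `lw` and `lf` are hypothetical hyperparameters.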