Learning Implicitly Recurrent CNNs Through Parameter Sharing

Pedro Savarese, Michael Maire

Sep 27, 2018 · ICLR 2019 Conference Blind Submission
  • Abstract: We introduce a parameter sharing scheme, in which different layers of a convolutional neural network (CNN) are defined by a learned linear combination of parameter tensors from a global bank of templates. Restricting the number of templates yields a flexible hybridization of traditional CNNs and recurrent networks. Compared to traditional CNNs, we demonstrate substantial parameter savings on standard image classification tasks, while maintaining accuracy. Our simple parameter sharing scheme, though defined via soft weights, in practice yields trained networks with near-strict recurrent structure; with negligible side effects, they convert into networks with actual recurrent loops. Training these networks thus implicitly involves discovery of suitable recurrent architectures. As a consequence, our hybrid networks are not only more parameter efficient, but also learn some tasks faster. Specifically, on synthetic tasks which are algorithmic in nature, our hybrid networks both train faster and extrapolate better on test examples outside the span of the training set.
  • Keywords: deep learning, architecture search, computer vision
  • TL;DR: We propose a method that enables CNN folding to create recurrent connections
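The core idea in the abstract, each layer's weight tensor expressed as a learned linear combination of templates from a global bank, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; all names (`templates`, `alphas`, `layer_weight`) and the shapes are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

num_templates = 4              # size of the global template bank (kept small to induce sharing)
num_layers = 6                 # layers drawing from the same bank
kernel_shape = (3, 3, 16, 16)  # example conv kernel: kH x kW x C_in x C_out

# Global bank of parameter templates, shared across all layers (learned in training).
templates = rng.standard_normal((num_templates, *kernel_shape))

# Per-layer mixing coefficients alpha[i, j], learned jointly with the templates.
alphas = rng.standard_normal((num_layers, num_templates))

def layer_weight(layer_idx):
    """Weight of layer i as a soft combination: W_i = sum_j alpha[i, j] * template[j]."""
    return np.tensordot(alphas[layer_idx], templates, axes=1)

W0 = layer_weight(0)
assert W0.shape == kernel_shape
```

If training drives the rows of `alphas` for different layers toward (near-)identical values, those layers share one effective weight tensor, which is what allows the network to be folded into an explicitly recurrent loop as the abstract describes.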