How far can we go without convolution: Improving fully-connected networks

Zhouhan Lin, Roland Memisevic, Kishore Konda

Feb 16, 2016 (modified: Feb 16, 2016) · ICLR 2016 workshop submission
  • CMT id: 47
  • Abstract: We propose ways to improve the performance of fully connected networks. We found that two approaches in particular have a strong effect on performance: linear bottleneck layers and unsupervised pre-training using autoencoders without hidden unit biases. We show how both approaches can be related to improving gradient flow and reducing sparsity in the network. We show that a fully connected network can yield approximately 70% classification accuracy on the permutation-invariant CIFAR-10 task, which is much higher than the previous state-of-the-art. By adding deformations to the training data, the fully connected network achieves 78% accuracy, which is close to the performance of a decent convolutional network. (An illustrative sketch of the two components follows the metadata below.)
  • Conflicts: informatik.uni-frankfurt.de, iro.umontreal.ca
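As a rough illustration of the two ideas named in the abstract, the sketch below builds a small fully connected classifier with linear bottleneck layers and an autoencoder whose hidden units carry no bias term. This is not the authors' code: the framework (PyTorch), layer sizes, activation choice, and all hyper-parameters are assumptions made purely for illustration.

```python
# Sketch only: linear bottleneck layers and a bias-free-hidden-unit autoencoder,
# as named in the abstract. Sizes and activations are illustrative assumptions.
import torch
import torch.nn as nn


class BiasFreeAutoencoder(nn.Module):
    """Autoencoder whose hidden (encoder) units have no bias term."""

    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.encoder = nn.Linear(n_in, n_hidden, bias=False)  # no hidden-unit bias
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.relu(self.encoder(x))
        return self.decoder(h), h


def fc_net_with_linear_bottlenecks(n_in=3072, n_hidden=4000,
                                   n_bottleneck=1000, n_classes=10):
    """Fully connected classifier in which each wide non-linear layer is
    followed by a narrower, purely linear bottleneck layer."""
    return nn.Sequential(
        nn.Linear(n_in, n_hidden), nn.ReLU(),
        nn.Linear(n_hidden, n_bottleneck),   # linear bottleneck: no non-linearity
        nn.Linear(n_bottleneck, n_hidden), nn.ReLU(),
        nn.Linear(n_hidden, n_bottleneck),   # second linear bottleneck
        nn.Linear(n_bottleneck, n_classes),
    )


if __name__ == "__main__":
    x = torch.randn(8, 3072)                 # e.g. flattened 32x32x3 CIFAR-10 images
    ae = BiasFreeAutoencoder(3072, 4000)
    recon, _ = ae(x)
    print(recon.shape)                       # torch.Size([8, 3072])
    clf = fc_net_with_linear_bottlenecks()
    print(clf(x).shape)                      # torch.Size([8, 10])
```

In such a setup, the autoencoder would be trained for reconstruction first and its encoder weights used to initialize the classifier's layers; the details of that pre-training schedule are given in the paper, not here.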
