Trusting SVM for Piecewise Linear CNNs

Leonard Berrada, Andrew Zisserman, M. Pawan Kumar

Nov 04, 2016 (modified: Mar 03, 2017) ICLR 2017 conference submission readers: everyone
  • Abstract: We present a novel layerwise optimization algorithm for the learning objective of Piecewise-Linear Convolutional Neural Networks (PL-CNNs), a large class of convolutional neural networks. Specifically, PL-CNNs employ piecewise linear non-linearities such as the commonly used ReLU and max-pool, and an SVM classifier as the final layer. The key observation of our approach is that the problem corresponding to the parameter estimation of a layer can be formulated as a difference-of-convex (DC) program, which happens to be a latent structured SVM. We optimize the DC program using the concave-convex procedure, which requires us to iteratively solve a structured SVM problem. This allows us to design an optimization algorithm with an optimal learning rate that does not require any tuning. Using the MNIST, CIFAR and ImageNet data sets, we show that our approach always improves over the state of the art variants of backpropagation and scales to large data and large network settings.
  • TL;DR: Formulating CNN layerwise optimization as an SVM problem
  • Conflicts: ox.ac.uk
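
The abstract's core idea, minimizing a difference-of-convex (DC) objective with the concave-convex procedure (CCCP), can be illustrated on a toy problem. The sketch below is not the paper's PL-CNN algorithm (whose CCCP subproblem is a structured SVM); it only shows the CCCP template: linearize the concave part at the current iterate and minimize the resulting convex surrogate, which monotonically decreases the objective.

```python
# Toy CCCP sketch on f(x) = x**4 - 2*x**2, written as the DC
# decomposition g(x) = x**4 (convex) minus h(x) = 2*x**2 (convex).
# Illustrative only; function names here are our own, not the paper's.

def cccp(x0, n_iters=50):
    """Run the concave-convex procedure from x0 and return the iterate."""
    x = x0
    for _ in range(n_iters):
        # Linearize the concave part -h at the current iterate x_t:
        # -h(x) ~ -h(x_t) - h'(x_t) * (x - x_t), with h'(x_t) = 4*x_t.
        slope = 4.0 * x
        # The convex surrogate g(x) - slope*x is minimized where
        # g'(x) = slope, i.e. 4*x**3 = slope, so x = cbrt(x_t).
        x = (abs(x) ** (1.0 / 3.0)) * (1.0 if x >= 0 else -1.0)
    return x

x_star = cccp(0.5)              # converges to the local minimum x = 1
f_star = x_star**4 - 2 * x_star**2
```

Each surrogate here has a closed-form minimizer; in the paper, the analogous convex subproblem is a structured SVM, for which an optimal step size can be computed exactly, hence no learning-rate tuning.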