Warped Convolutions: Efficient Invariance to Spatial Transformations

Joao F. Henriques, Andrea Vedaldi

Nov 04, 2016 (modified: Jan 17, 2017) ICLR 2017 conference submission readers: everyone
  • Abstract: Convolutional Neural Networks (CNNs) are extremely efficient, since they exploit the inherent translation-invariance of natural images. However, translation is just one of a myriad of useful spatial transformations. Can the same efficiency be attained when considering other spatial invariances? Such generalized convolutions have been considered in the past, but at a high computational cost. We present a construction that is simple and exact, yet has the same computational complexity that standard convolutions enjoy. It consists of a constant image warp followed by a simple convolution, which are standard blocks in deep learning toolboxes. With a carefully crafted warp, the resulting architecture can be made equivariant to a wide range of 2-parameter spatial transformations. We show encouraging results in realistic scenarios, including the estimation of vehicle poses in the Google Earth dataset (rotation and scale), and face poses in Annotated Facial Landmarks in the Wild (3D rotations under perspective).
  • Conflicts: robots.ox.ac.uk, isr.uc.pt
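
The construction described in the abstract — a constant warp followed by an ordinary convolution — can be illustrated with a minimal sketch. The example below uses a log-polar warp, under which rotation and scaling about the image centre become translations, so a standard convolution applied afterwards is equivariant to that 2-parameter transformation group. This is an independent illustration of the idea, not the paper's implementation; the grid resolution, interpolation order, and helper names are choices made here.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.signal import convolve2d

def log_polar_warp(image, out_shape=(64, 64)):
    """Resample an image on a fixed log-polar grid centred on the image.

    Rows of the output index log-radius, columns index angle, so a
    rotation/scaling of the input becomes a translation of the output.
    (Illustrative sketch; parameters are assumptions, not the paper's.)
    """
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    n_r, n_t = out_shape
    max_log_r = np.log(min(cy, cx))
    log_r = np.linspace(0.0, max_log_r, n_r)
    theta = np.linspace(0.0, 2 * np.pi, n_t, endpoint=False)
    r = np.exp(log_r)[:, None]
    ys = cy + r * np.sin(theta)[None, :]  # sample coordinates (rows)
    xs = cx + r * np.cos(theta)[None, :]  # sample coordinates (cols)
    return map_coordinates(image, [ys, xs], order=1, mode="nearest")

# Warped convolution: constant warp, then an ordinary convolution.
rng = np.random.default_rng(0)
image = rng.standard_normal((65, 65))
kernel = rng.standard_normal((3, 3))
warped = log_polar_warp(image)
response = convolve2d(warped, kernel, mode="valid")
print(response.shape)
```

Because the warp grid is fixed, it can be precomputed once; the per-image cost is then a single resampling plus a standard convolution, matching the complexity claim in the abstract.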