Improving Invariance and Equivariance Properties of Convolutional Neural Networks
Christopher Tensmeyer, Tony Martinez
Nov 05, 2016 (modified: Dec 13, 2016) · ICLR 2017 conference submission · readers: everyone
Abstract: Convolutional Neural Networks (CNNs) learn highly discriminative representations from data, but how robust and structured are these representations? How does the data shape the internal network representation? We shed light on these questions by empirically measuring the invariance and equivariance properties of a large number of CNNs trained with various types of input transformations. We find that CNNs learn invariance with respect to all nine tested transformation types, and that this invariance extends to transformations outside the training range. We also measure the distance between CNN representations and show that similar input transformations lead to more similar internal representations; transformations can be grouped by the way they affect the learned representation. Finally, we propose a loss function that aims to improve CNN equivariance.
TL;DR: Data augmentation shapes internal network representation and makes predictions robust to input transformations.
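One way to empirically measure the kind of representational invariance the abstract describes is to compare a network's features for an input and its transformed version, e.g. via cosine similarity averaged over a sample of inputs. The sketch below is illustrative only and is not the authors' protocol; `dummy_features` is a hypothetical stand-in for extracting a real CNN's intermediate activations, and the horizontal flip is one example transformation.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flat feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def invariance_score(features_fn, images, transform):
    """Mean cosine similarity between representations of original and
    transformed inputs; 1.0 means the features are perfectly invariant."""
    sims = []
    for x in images:
        f_orig = features_fn(x)
        f_trans = features_fn(transform(x))
        sims.append(cosine_similarity(f_orig, f_trans))
    return float(np.mean(sims))

# Hypothetical feature extractor: a per-channel global average pool,
# standing in for a trained CNN's intermediate layer activations.
def dummy_features(img):
    return img.mean(axis=(0, 1))

rng = np.random.default_rng(0)
imgs = [rng.random((8, 8, 3)) for _ in range(4)]       # toy HxWxC "images"
hflip = lambda x: x[:, ::-1, :]                        # horizontal flip

score = invariance_score(dummy_features, imgs, hflip)  # 1.0: pooling is flip-invariant
```

Because global average pooling discards spatial layout, this toy extractor scores 1.0 under a flip; a real CNN layer would typically score below 1.0, and the score's growth during training with augmented data is what indicates learned invariance.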