Improving Invariance and Equivariance Properties of Convolutional Neural Networks

27 May 2022, 03:29 (edited 13 Dec 2016) · ICLR 2017 conference submission · Readers: Everyone
  • TL;DR: Data augmentation shapes internal network representation and makes predictions robust to input transformations.
  • Abstract: Convolutional Neural Networks (CNNs) learn highly discriminative representations from data, but how robust and structured are these representations? How does the data shape the internal network representation? We shed light on these questions by empirically measuring the invariance and equivariance properties of a large number of CNNs trained with various types of input transformations. We find that CNNs learn invariance with respect to all 9 tested transformation types, and that this invariance extends to transformations outside the training range. We also measure the distance between CNN representations and show that similar input transformations lead to more similar internal representations; transformations can be grouped by the way they affect the learned representation. Finally, we propose a loss function that aims to improve CNN equivariance.
  • Keywords: Deep learning
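The abstract's notion of equivariance, and the kind of penalty an equivariance-improving loss might use, can be sketched as follows. This is a minimal illustration of the general idea (a representation f is equivariant to a transformation T if f(T(x)) = T'(f(x)) for some feature-space transformation T'), not the paper's actual loss; the function names and the toy ReLU "network" are hypothetical.

```python
import numpy as np

def equivariance_loss(feat_of_transformed_input, transformed_feat):
    """Mean squared error between f(T(x)) and T'(f(x)).

    Zero exactly when the representation is equivariant to T
    (hypothetical sketch, not the loss from the paper).
    """
    diff = feat_of_transformed_input - transformed_feat
    return float(np.mean(diff ** 2))

# Toy check: T = horizontal flip, and a pointwise "network" (ReLU),
# which is exactly flip-equivariant with T' = horizontal flip.
relu = lambda x: np.maximum(x, 0.0)
x = np.random.default_rng(0).normal(size=(4, 8))

loss = equivariance_loss(relu(x[:, ::-1]), relu(x)[:, ::-1])
print(loss)  # 0.0 for a perfectly equivariant map
```

In a training setting, a term like this would be added to the task loss so that gradient descent pushes the learned features toward transforming predictably under the chosen input transformations.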