Improving Invariance and Equivariance Properties of Convolutional Neural Networks

19 Apr 2024 (modified: 21 Jul 2022) · Submitted to ICLR 2017 · Readers: Everyone
Abstract: Convolutional Neural Networks (CNNs) learn highly discriminative representations from data, but how robust and structured are these representations? How does the data shape the internal network representation? We shed light on these questions by empirically measuring the invariance and equivariance properties of a large number of CNNs trained with various types of input transformations. We find that CNNs learn invariance with respect to all nine tested transformation types, and that this invariance extends to transformations outside the training range. We also measure the distance between CNN representations and show that similar input transformations lead to more similar internal representations; transformations can thus be grouped by the way they affect the learned representation. Finally, we propose a loss function that aims to improve CNN equivariance.
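
The page does not give the form of the proposed equivariance loss. As a rough illustration only, here is a minimal PyTorch sketch of one common way to encourage equivariance: penalize the gap between the features of a transformed input and the identically transformed features of the original input, shown for quarter-turn rotations. The name `equivariance_loss`, the restriction to 90° rotations, and the assumption that `model` returns square spatial feature maps are illustrative, not taken from the paper.

    import torch
    import torch.nn.functional as F

    def equivariance_loss(model, x, quarter_turns=1):
        """Hypothetical sketch: encourage f(rot(x)) ~= rot(f(x)).

        Assumes `model` maps (N, C, H, W) images to square spatial
        feature maps, so rotation preserves their shape. This is an
        illustration, not the paper's actual loss.
        """
        f_x = model(x)
        # rotate the input, then extract its features
        x_rot = torch.rot90(x, k=quarter_turns, dims=(2, 3))
        f_of_rot = model(x_rot)
        # rotate the original features by the same amount
        rot_of_f = torch.rot90(f_x, k=quarter_turns, dims=(2, 3))
        # penalize the equivariance gap
        return F.mse_loss(f_of_rot, rot_of_f)

In training, such a term would typically be added to the task loss with a weight, e.g. `loss = task_loss + lam * equivariance_loss(backbone, images)`.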
TL;DR: Data augmentation shapes internal network representation and makes predictions robust to input transformations.
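
The TL;DR's robustness claim suggests a simple way invariance might be quantified. Below is a minimal sketch, assuming (not from the paper) that `model` exposes flattenable features and using cosine similarity as the representation distance.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def invariance_score(model, x, transform):
        """Hypothetical sketch: mean cosine similarity between
        representations of inputs and their transformed versions.
        A score near 1.0 indicates a highly invariant representation."""
        f_x = model(x).flatten(1)              # (N, D) features
        f_tx = model(transform(x)).flatten(1)  # features of transformed inputs
        return F.cosine_similarity(f_x, f_tx, dim=1).mean()

For example, invariance to quarter-turn rotation could be probed with `invariance_score(backbone, images, lambda x: torch.rot90(x, 1, dims=(2, 3)))`.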
Conflicts: byu.edu
Keywords: Deep learning