Co-Attentive Equivariant Neural Networks: Focusing Equivariance On Transformations Co-Occurring in Data

Anonymous

Sep 25, 2019 · ICLR 2020 Conference Blind Submission
  • TL;DR: We utilize attention to restrict equivariant neural networks to the set of transformations co-occurring in data.
  • Abstract: Equivariance is a desirable property, as it yields far more parameter-efficient neural architectures and preserves the structure of the input through the feature mapping. However, even though some combinations of transformations might never appear in data (e.g., a face with a horizontal nose), current equivariant architectures consider the set of all possible transformations in the transformation group when generating feature representations. In contrast, the human visual system is able to attend to the set of relevant transformations occurring in the environment so as to assist and improve object recognition. Based on this observation, we modify conventional equivariant feature mappings such that they are able to attend to the set of transformations co-occurring in the data (see the sketch after this list). Our experiments show that neural networks using co-attentive equivariant feature mappings consistently outperform those using conventional ones, in both fully rotational (rotated MNIST) and partially rotational (CIFAR-10) settings.
  • Code: https://www.dropbox.com/sh/2gghao89strdotw/AAAYJ6XclnfeoS3AfN9Z-n5Wa?dl=0
  • Keywords: Equivariant Neural Networks, Attention Mechanisms, Deep Learning
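The abstract describes reweighting equivariant feature responses with attention over the transformation group, so that transformations that co-occur in the data dominate the representation. Below is a minimal, hypothetical sketch of that idea, not the authors' implementation: `CoAttentionOverGroup`, the `(B, C, |G|, H, W)` tensor layout of a group-convolution output, and the input-independent learned scores are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttentionOverGroup(nn.Module):
    """Hypothetical sketch: softmax attention over the group axis of a
    rotation-equivariant feature map of shape (B, C, |G|, H, W)."""

    def __init__(self, channels: int, group_size: int):
        super().__init__()
        # One learned score per (channel, group element). Zeros give a
        # uniform softmax at init, i.e. the unmodified equivariant map.
        # A richer variant could condition these scores on the input.
        self.scores = nn.Parameter(torch.zeros(channels, group_size))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, |G|, H, W) -- one response per rotated filter copy.
        attn = F.softmax(self.scores, dim=-1)        # weights over |G|
        return x * attn.view(1, *attn.shape, 1, 1)   # reweight group copies

# Usage: 8 rotations of a 16-channel feature map on a 28x28 input.
x = torch.randn(4, 16, 8, 28, 28)
y = CoAttentionOverGroup(channels=16, group_size=8)(x)
print(y.shape)  # torch.Size([4, 16, 8, 28, 28])
```

Attention weights that collapse onto a subset of rotations suppress transformation combinations that never co-occur in the data, which is the behavior the abstract motivates with the "horizontal nose" example.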