Scale-Equivariant Neural Networks with Decomposed Convolutional Filters

Sep 25, 2019 Blind Submission
  • Keywords: scale-equivariant, convolutional neural network, deformation robustness
  • TL;DR: We construct scale-equivariant convolutional neural networks in their most general form, with both computational efficiency and proven deformation robustness.
  • Abstract: Encoding the input scale information explicitly into the representation learned by a convolutional neural network (CNN) is beneficial for many vision tasks, especially when dealing with multiscale input signals. In this paper, we study a scale-equivariant CNN architecture with joint convolutions across space and the scaling group, which is shown to be both sufficient and necessary to achieve scale-equivariant representations. To reduce model complexity and computational burden, we decompose the convolutional filters under two pre-fixed separable bases and truncate the expansion to low-frequency components. A further benefit of the truncated filter expansion is the improved deformation robustness of the equivariant representation. Numerical experiments demonstrate that the proposed scale-equivariant neural network with decomposed convolutional filters (ScDCFNet) achieves significantly improved performance in multiscale image classification and better interpretability than regular CNNs at a reduced model size.
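The filter decomposition described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the abstract does not specify which pre-fixed separable bases are used, so a separable 2-D DCT basis is assumed here as a stand-in. A filter is expanded into basis coefficients, and only the low-frequency block is kept before reconstruction.

```python
import numpy as np

def dct_basis(K):
    # 1-D orthonormal DCT-II basis (K x K); rows are basis functions.
    # NOTE: a stand-in assumption -- the paper's actual pre-fixed bases
    # are not given in the abstract.
    n = np.arange(K)
    B = np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / K)
    B[0] *= np.sqrt(1.0 / K)
    B[1:] *= np.sqrt(2.0 / K)
    return B

def truncate_filter(w, n_low):
    # Expand a K x K filter under the separable 2-D basis and keep only
    # the n_low x n_low low-frequency coefficients, as in the truncated
    # expansion the abstract describes.
    K = w.shape[0]
    B = dct_basis(K)
    coeffs = B @ w @ B.T              # expansion coefficients
    mask = np.zeros_like(coeffs)
    mask[:n_low, :n_low] = 1.0        # low-frequency block
    return B.T @ (coeffs * mask) @ B  # reconstructed filter

rng = np.random.default_rng(0)
w = rng.standard_normal((5, 5))
w_lo = truncate_filter(w, 3)    # 9 of 25 coefficients retained
w_full = truncate_filter(w, 5)  # full expansion reconstructs w exactly
assert np.allclose(w_full, w)
```

Because the basis is orthonormal, keeping the full coefficient set reconstructs the filter exactly, while truncation acts as a low-pass projection, which is the mechanism the abstract credits for both the reduced parameter count and the improved deformation robustness.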