Flopping for FLOPs: Leveraging Equivariance for Computational Efficiency

Published: 01 May 2025, Last Modified: 18 Jun 2025, ICML 2025 Spotlight Poster, CC BY-SA 4.0
TL;DR: Equivariant neural networks for image input are scalable.
Abstract: Incorporating geometric invariance into neural networks enhances parameter efficiency but typically increases computational costs. This paper introduces new equivariant neural networks that preserve symmetry while keeping the number of floating-point operations (FLOPs) per parameter comparable to that of standard non-equivariant networks. We focus on horizontal mirroring (flopping) invariance, which is common in many computer vision tasks. The main idea is to parametrize the feature spaces in terms of mirror-symmetric and mirror-antisymmetric features, i.e., irreps of the flopping group. In this basis, the linear layers become block-diagonal and require half the number of FLOPs. Our approach reduces both FLOPs and wall-clock time, providing a practical solution for efficient, scalable, symmetry-aware architectures.
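The block-diagonal claim in the abstract follows from representation theory: a linear map that commutes with the flop action cannot mix the symmetric and antisymmetric irreps. Below is a minimal PyTorch sketch of this idea; the names `split_sym_anti` and `BlockDiagonalLinear` are illustrative placeholders, not the API of the linked repository.

```python
import torch
import torch.nn as nn


def split_sym_anti(feat: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Split features into mirror-symmetric and mirror-antisymmetric parts
    by adding/subtracting the horizontally flipped copy (last axis = width)."""
    flipped = torch.flip(feat, dims=[-1])
    return 0.5 * (feat + flipped), 0.5 * (feat - flipped)


class BlockDiagonalLinear(nn.Module):
    """Illustrative flopping-equivariant linear layer.

    In the (symmetric, antisymmetric) basis, equivariance forbids mixing
    the two components, so one dense d x d matrix (d^2 multiply-adds)
    becomes two independent (d/2) x (d/2) blocks, costing
    2 * (d/2)^2 = d^2 / 2 multiply-adds: half the FLOPs.
    """

    def __init__(self, dim: int):
        super().__init__()
        assert dim % 2 == 0, "assumes an even split between the two irreps"
        half = dim // 2
        self.w_sym = nn.Linear(half, half, bias=True)
        # No bias on the antisymmetric block: flopping negates
        # antisymmetric features, and f(-v) = -f(v) forces b = 0.
        self.w_anti = nn.Linear(half, half, bias=False)

    def forward(self, x_sym: torch.Tensor, x_anti: torch.Tensor):
        # Each irrep component is mapped by its own block; they never mix.
        return self.w_sym(x_sym), self.w_anti(x_anti)


# Usage: an 8-dim feature stored as two 4-dim irrep components.
x_sym, x_anti = torch.randn(2, 4), torch.randn(2, 4)
layer = BlockDiagonalLinear(dim=8)
y_sym, y_anti = layer(x_sym, x_anti)
```

Note the counting: each half-sized block preserves the one-FLOP-per-parameter ratio of a dense layer while halving the absolute FLOP count at a given width, which is the source of the wall-clock savings described above.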
Lay Summary: We introduce new and improved neural networks for image processing. A neural network is a program with many free parameters that can be tuned on training data. In our case, the training data consists of images and corresponding labels, and we aim to train networks that can accurately classify images (image classification is a useful prototype task). Since an image and its mirror image are typically equally plausible, we enforce mirror-invariant classification by restricting the neural network parameters. This runs counter to the trend of letting neural networks learn as much as possible from data; however, we provide new arguments for enforcing invariances rather than learning them from data. In particular, enforcing mirror-invariance yields faster neural networks.
Link To Code: https://github.com/georg-bn/flopping-for-flops
Primary Area: Applications->Computer Vision
Keywords: Equivariance, efficiency, scalable, image classification, vision
Submission Number: 11798