Sparse Convolutions on Lie Groups

Published: 07 Nov 2022, Last Modified: 05 May 2023, NeurReps 2022 Poster
Keywords: sparse convolutions on Lie groups, symmetry, relaxed equivariance, invariance learning, Lie groups, continuous convolutional filters
Abstract: Convolutional neural networks have proven very successful for a wide range of modelling tasks. Convolutional layers embed equivariance to discrete translations into the architectural structure of neural networks. Extensions have generalized equivariance to continuous Lie groups beyond translation, such as rotation, scale, or more complex symmetries. Other works have allowed for relaxed equivariance constraints to better model data that does not fully respect symmetries, while still leveraging the useful inductive biases that equivariance provides. How continuous convolutional filters on Lie groups can best be parameterised remains an open question. To parameterise sufficiently flexible continuous filters, small MLP hypernetworks are often used in practice. Although this works, it typically introduces many additional model parameters. To be more parameter-efficient, we propose an alternative approach and define continuous filters with a small finite set of basis functions placed at anchor points. Regular convolutional layers appear as a special case, allowing for practical conversion between regular filters and our basis-function filter formulation at equal memory complexity. The basis-function filters enable efficient construction of neural network architectures with equivariance or relaxed equivariance, outperforming baselines on vision classification tasks.
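To make the contrast with MLP hypernetworks concrete, below is a minimal sketch of the anchor-point idea, assuming Gaussian radial basis functions on R^2 as the basis and translation offsets as the group elements. The class name AnchorPointFilter, the RBF choice, and all hyperparameters are hypothetical illustrations, not the paper's implementation.

```python
import torch
import torch.nn as nn


class AnchorPointFilter(nn.Module):
    """Continuous filter as a weighted sum of basis functions at anchor points.

    Hypothetical sketch: the filter is defined by a small finite set of
    learnable coefficients attached to fixed anchor points; here we assume
    Gaussian RBFs on R^2 as a concrete choice of basis.
    """

    def __init__(self, num_anchors: int, in_channels: int, out_channels: int,
                 sigma: float = 0.5):
        super().__init__()
        # Anchor points on a regular grid in [-1, 1]^2. Evaluating the filter
        # exactly at this grid recovers a regular discrete convolution kernel,
        # which is the "special case" conversion mentioned in the abstract.
        side = int(num_anchors ** 0.5)
        ticks = torch.linspace(-1.0, 1.0, side)
        anchors = torch.stack(
            torch.meshgrid(ticks, ticks, indexing="ij"), dim=-1
        ).reshape(-1, 2)
        self.register_buffer("anchors", anchors)  # (A, 2)
        # One learnable coefficient per anchor per channel pair, analogous to
        # the weights of a discrete kernel (same memory complexity).
        self.coeffs = nn.Parameter(
            torch.randn(anchors.shape[0], out_channels, in_channels) * 0.1
        )
        self.sigma = sigma

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        """Evaluate the filter at arbitrary continuous offsets.

        coords: (N, 2) offsets (here translations in R^2).
        returns: (N, out_channels, in_channels) filter values.
        """
        # Gaussian basis responses between query coords and anchors: (N, A).
        d2 = ((coords[:, None, :] - self.anchors[None, :, :]) ** 2).sum(-1)
        basis = torch.exp(-d2 / (2.0 * self.sigma ** 2))
        # Weighted combination of the anchor coefficients.
        return torch.einsum("na,aoi->noi", basis, self.coeffs)


# Usage: sample the same learned filter on a rotated grid, the kind of
# continuous evaluation a convolution over group elements requires.
filt = AnchorPointFilter(num_anchors=9, in_channels=3, out_channels=8)
theta = torch.tensor(0.3)
rot = torch.stack([
    torch.stack([torch.cos(theta), -torch.sin(theta)]),
    torch.stack([torch.sin(theta), torch.cos(theta)]),
])
grid = torch.stack(torch.meshgrid(
    torch.linspace(-1, 1, 3), torch.linspace(-1, 1, 3), indexing="ij"
), dim=-1).reshape(-1, 2)
kernel = filt(grid @ rot.T)  # (9, 8, 3) filter values on the rotated grid
```

Under these assumptions the parameter count matches a regular 3x3 kernel (one coefficient per anchor per channel pair), whereas an MLP hypernetwork would add its own weight matrices on top; this is the parameter-efficiency argument in miniature.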