Neural Symmetry Detection for Learning Neural Network Constraints

Published: 16 Jun 2024, Last Modified: 16 Jun 2024 · HiLD at ICML 2024 Poster · CC BY 4.0
Keywords: symmetry detection, Lie theory, weight constraints
TL;DR: We investigate how the matrix exponential can be leveraged to learn continuous symmetry transformations by computing it in a low-dimensional latent space.
Abstract: Neural symmetry detection can be defined as the deep-learning-aided task of recovering both the nature of the transformation that relates points in a data set and the distribution of the transformation's magnitude. Applications range from automatic data augmentation to model selection. In this work, we investigate how the matrix exponential can be leveraged to recover the correct symmetry transformation, encoded as the generator of a Lie group, for both affine and non-affine transformations. To make the calculation of the matrix exponential tractable, this operation is performed in a low-dimensional latent space. Additionally, a loss term is introduced that encourages the latent-space generator to match its pixel-space counterpart.
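
A minimal sketch of the idea described in the abstract, assuming a PyTorch-style autoencoder setup (all module and variable names here are hypothetical, not the authors' implementation): a learnable generator G acts on the latent code via the matrix exponential, so that exp(t·G) transports the latent of one sample toward the latent of its transformed counterpart, and the reconstruction error of the transformed sample drives G toward the underlying symmetry.

```python
import torch
import torch.nn as nn

class LatentLieModel(nn.Module):
    """Autoencoder with a learnable Lie-group generator acting on the latent space."""
    def __init__(self, in_dim: int = 784, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))
        # Learnable generator of a one-parameter group on the latent space.
        self.generator = nn.Parameter(0.01 * torch.randn(latent_dim, latent_dim))

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        """Encode x, apply exp(t * G) in latent space, decode the result."""
        z = self.encoder(x)                                            # (B, d)
        # Batched matrix exponential of the scaled generator: (B, d, d).
        transform = torch.linalg.matrix_exp(t[:, None, None] * self.generator)
        z_t = torch.einsum("bij,bj->bi", transform, z)                 # transported latents
        return self.decoder(z_t)

model = LatentLieModel()
x, x_transformed = torch.rand(8, 784), torch.rand(8, 784)   # placeholder image pair
t = torch.rand(8)                                            # transformation magnitudes
# Reconstructing the transformed sample from the original one trains the generator;
# the paper additionally adds a loss term matching the latent-space generator
# to its pixel-space counterpart (not shown here).
loss = nn.functional.mse_loss(model(x, t), x_transformed)
loss.backward()
```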
Student Paper: No
Submission Number: 68