Keywords: Group Theory, Representation Theory, Feature Learning, Geometric Deep Learning, homomorphic
TL;DR: When learning over symmetry groups, a learned matrix representation of group elements works well as a feature representation.
Abstract: Group theory has been used in machine learning to provide a theoretically grounded approach for incorporating known symmetry transformations into tasks ranging from robotics to protein modeling. In these applications, equivariant neural networks use known symmetry groups with predefined representations to learn over geometric input data. We propose MatrixNet, a neural network architecture that learns matrix representations of group element inputs instead of using predefined representations. MatrixNet achieves higher sample efficiency and generalization than several standard baselines on prediction tasks over several finite groups and the Artin braid group. We also show that MatrixNet respects group relations, allowing generalization to group elements of greater word length than those in the training set. Our code is available at https://github.com/lucas-laird/MatrixNet.
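To illustrate the core idea of learning a matrix representation of group elements, here is a minimal PyTorch sketch (not the authors' implementation; the class name `MatrixRepresentation` and all hyperparameters are hypothetical). Each group generator is assigned a learned d×d matrix, and a word over the generators is encoded as the product of its generators' matrices, so concatenation of words corresponds to matrix multiplication.

```python
import torch
import torch.nn as nn

class MatrixRepresentation(nn.Module):
    """Sketch: map each group generator to a learned d x d matrix and
    encode a word (sequence of generator indices) as the product of
    those matrices. Hypothetical illustration, not the paper's code."""

    def __init__(self, num_generators: int, dim: int):
        super().__init__()
        # One unconstrained parameter matrix per generator, initialized
        # near the identity so early products stay well-scaled.
        init = torch.eye(dim).repeat(num_generators, 1, 1)
        init = init + 0.01 * torch.randn(num_generators, dim, dim)
        self.generator_matrices = nn.Parameter(init)

    def forward(self, word: torch.Tensor) -> torch.Tensor:
        # word: (batch, length) tensor of generator indices.
        mats = self.generator_matrices[word]   # (batch, length, d, d)
        rep = mats[:, 0]
        for i in range(1, mats.shape[1]):
            rep = rep @ mats[:, i]             # left-to-right matrix product
        return rep.flatten(start_dim=1)        # one feature vector per word


# Example: words of length 3 over 4 generators, mapped to 16-dim features
# that a downstream predictor could consume.
model = MatrixRepresentation(num_generators=4, dim=4)
words = torch.randint(0, 4, (8, 3))
features = model(words)                        # shape (8, 16)
```

Because words are encoded via matrix products, enforcing that the learned matrices satisfy the group's relations (as the paper reports MatrixNet does) makes the features consistent for longer words than those seen during training.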
Primary Area: Deep learning architectures
Submission Number: 17920