Learning Polynomial Problems with SL(2)-Equivariance

Published: 18 Jun 2023, Last Modified: 02 Jul 2023, TAGML 2023 Poster
Keywords: equivariance, invariance, polynomials, noncompact, special linear group, data augmentation
TL;DR: We propose machine learning approaches that are equivariant to SL(2,R), the non-compact group of area-preserving linear transformations, for learning to solve polynomial optimization problems.
Abstract: We introduce a set of polynomial learning problems that are equivariant to the non-compact group $SL(2,\mathbb{R})$. $SL(2,\mathbb{R})$ consists of area-preserving linear transformations, and it captures the symmetries of a variety of polynomial-based problems not previously studied in the machine learning community, such as verifying positivity (e.g., in sum-of-squares optimization) and minimization. While compact groups admit many architectural building blocks, such as group convolutions, non-compact groups do not fit within this paradigm and are therefore more challenging. We consider several equivariance-based learning approaches for solving polynomial problems, including both data augmentation and a fully $SL(2,\mathbb{R})$-equivariant architecture. In experiments, we broadly demonstrate that machine learning provides a promising alternative to traditional SDP-based baselines, achieving tenfold speedups while retaining high accuracy. Surprisingly, the most successful approaches incorporate only a well-conditioned subset of $SL(2,\mathbb{R})$, rather than the entire group. This is a rare example of a symmetric problem where data augmentation outperforms full equivariance, and it suggests lessons for other problems with non-compact symmetries.
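To make the group action concrete, here is a minimal NumPy sketch, not taken from the paper: it shows the induced $SL(2,\mathbb{R})$ action on the coefficients of a degree-$d$ binary form, plus one plausible way to sample only well-conditioned group elements for data augmentation. The function names (`sl2_act`, `sample_sl2`), the substitution convention, and the SVD-based sampling scheme are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sl2_act(g, coeffs):
    """Transform coefficients of a binary form under g in SL(2, R).

    The form p(x, y) = sum_k coeffs[k] * x**(deg-k) * y**k is mapped to
    p(a*x + b*y, c*x + d*y) for g = [[a, b], [c, d]] with det(g) = 1.
    (Conventions differ, e.g. acting by g vs. g^{-1}; this is one choice.)
    """
    a, b, c, d = np.asarray(g, dtype=float).ravel()
    deg = len(coeffs) - 1
    out = np.zeros(deg + 1)
    for k, ck in enumerate(coeffs):
        # Expand ck * (a*x + b*y)**(deg-k) * (c*x + d*y)**k; multiplying
        # binary forms corresponds to convolving their coefficient vectors.
        term = np.array([1.0])
        for _ in range(deg - k):
            term = np.convolve(term, [a, b])
        for _ in range(k):
            term = np.convolve(term, [c, d])
        out += ck * term
    return out

def sample_sl2(rng, max_log_s=1.0):
    """Sample g in SL(2, R) with condition number at most exp(2 * max_log_s).

    Uses the SVD parameterization g = R(t1) @ diag(s, 1/s) @ R(t2); bounding
    the singular value s restricts sampling to a well-conditioned subset of
    the group (a hypothetical augmentation scheme, not the paper's).
    """
    def rot(t):
        return np.array([[np.cos(t), -np.sin(t)],
                         [np.sin(t),  np.cos(t)]])
    s = np.exp(rng.uniform(0.0, max_log_s))
    return rot(rng.uniform(0.0, 2 * np.pi)) @ np.diag([s, 1.0 / s]) @ rot(rng.uniform(0.0, 2 * np.pi))

# Example: one augmented training sample for a degree-4 form.
rng = np.random.default_rng(0)
coeffs = rng.normal(size=5)           # p(x, y) = sum_k coeffs[k] x^(4-k) y^k
g = sample_sl2(rng, max_log_s=0.5)    # condition number <= exp(1)
augmented = sl2_act(g, coeffs)        # positivity label is preserved
```

Since any invertible linear change of variables preserves positivity of a form, the augmented coefficient vector keeps the same positivity label as the original, which is what makes this style of augmentation viable for the verification task.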
Supplementary Materials: zip
Type Of Submission: Extended Abstract (4 pages, non-archival)
Submission Number: 67