Group invariants of minimum distortion

Published: 25 Mar 2025, Last Modified: 20 May 2025
Venue: SampTA 2025 Invited Talk
License: CC BY 4.0
Session: Invariant theory for machine learning (Dustin Mixon, Soledad Villar)
Keywords: group invariants, distortion
Abstract: In many applications, data arises that should properly be understood "modulo" an inherent symmetry. For example, an audio signal can be sampled and represented as a vector, but for classification tasks the vector should be understood modulo translation. Likewise, a point cloud can be represented as columns of a matrix, but the resulting matrix should be understood modulo column permutations. Data like this naturally resides in a quotient metric space $X$ that collapses all members of a common group orbit from Euclidean space down to a single point in $X$. To work with the data, we first apply some embedding $f \colon X \to E$ of the quotient space into Euclidean space, and then we apply a machine learning algorithm on the embedded data. For many tasks, the effectiveness of this workflow depends on the distortion of the embedding $f$, that is, the ratio of its optimal upper and lower Lipschitz bounds. In this talk, we discuss the problem of constructing embeddings with minimal distortion. Based on joint work with Jameson Cahill and Dustin Mixon.
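
For readers who want the key quantity written out, the abstract's notion of distortion can be stated as follows (a sketch of the standard definition; the symbols $d_X$, $A$, $B$ are notation introduced here, not taken from the talk). An embedding $f \colon X \to E$ of the quotient metric space is bi-Lipschitz with bounds $0 < A \le B$ if
\[
  A \, d_X(x, y) \;\le\; \| f(x) - f(y) \|_E \;\le\; B \, d_X(x, y) \qquad \text{for all } x, y \in X,
\]
and its distortion is the ratio of the optimal (smallest admissible) upper bound to the optimal (largest admissible) lower bound,
\[
  \operatorname{dist}(f) \;=\; \frac{B_{\mathrm{opt}}}{A_{\mathrm{opt}}}
  \;=\; \frac{\displaystyle \sup_{x \neq y} \|f(x)-f(y)\|_E \,/\, d_X(x,y)}{\displaystyle \inf_{x \neq y} \|f(x)-f(y)\|_E \,/\, d_X(x,y)} \;\ge\; 1 .
\]

As a toy illustration of the workflow (not taken from the talk): for one-dimensional "point clouds" $x \in \mathbb{R}^n$ considered modulo permutations of their entries, sorting the entries is a classical embedding with distortion $1$, since $\|\mathrm{sort}(x) - \mathrm{sort}(y)\|_2 = \min_{\pi} \|x - \pi(y)\|_2$ by the rearrangement inequality. The short Python sketch below (names like quotient_dist and embed are hypothetical) checks this empirically by sampling pairs and comparing the embedded distance to the brute-force quotient distance.

import itertools
import numpy as np

rng = np.random.default_rng(0)

def quotient_dist(x, y):
    """Quotient metric on R^n modulo coordinate permutations:
    min over permutations pi of ||x - pi(y)||_2 (brute force, small n only)."""
    n = len(x)
    return min(np.linalg.norm(x - y[list(p)]) for p in itertools.permutations(range(n)))

def embed(x):
    """Candidate embedding of the quotient space into R^n: sort the entries."""
    return np.sort(x)

# Empirical distortion: ratio of the largest to the smallest observed value of
# ||f(x) - f(y)|| / d_X(x, y) over sampled pairs (this lower-bounds the true distortion).
ratios = []
for _ in range(200):
    x, y = rng.normal(size=5), rng.normal(size=5)
    ratios.append(np.linalg.norm(embed(x) - embed(y)) / quotient_dist(x, y))

print("empirical distortion:", max(ratios) / min(ratios))  # ~1.0: sorting is an isometry here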
Submission Number: 32