Regularity in Canonicalized Models: A Theoretical Perspective
Abstract: In learning with invariances (or symmetries), canonicalization is a widely used technique that projects the data onto a smaller subset of the input space, reducing the redundancy induced by the symmetry. The projected data are then processed by a function from a designated function class to obtain the final invariant representation.
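A minimal sketch of this pipeline, assuming permutation symmetry on small point sets (an illustrative choice, not a setting taken from the abstract): canonicalization picks the lexicographically sorted representative of each orbit, and any fixed downstream function then yields a permutation-invariant model.

```python
import numpy as np

def canonicalize(x: np.ndarray) -> np.ndarray:
    """Project a set of points (rows of x) onto a canonical representative
    of its permutation orbit by sorting rows lexicographically."""
    order = np.lexsort(x.T[::-1])  # primary key: first column, then second, ...
    return x[order]

def invariant_model(x: np.ndarray) -> float:
    """End-to-end model: canonicalize, then apply an arbitrary fixed readout.
    The composition is invariant to row permutations of x."""
    z = canonicalize(x)
    return float(z.flatten() @ np.arange(z.size))  # non-symmetric readout

x = np.array([[3.0, 1.0], [1.0, 2.0], [2.0, 0.0]])
perm = np.random.permutation(len(x))
assert invariant_model(x) == invariant_model(x[perm])  # same orbit, same output
```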
Although canonicalization is often simple and flexible, both theoretical and empirical evidence suggests that the projection map can be discontinuous and unstable, which poses challenges for machine learning applications. The overall end-to-end representation, however, can still remain continuous.
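A minimal sketch of this phenomenon, assuming the group of integer translations acting on the real line (an illustrative choice, not an example from the abstract): the canonical projection x ↦ x − ⌊x⌋ jumps at every integer, yet composing it with a 1-periodic readout produces a continuous, translation-invariant end-to-end model.

```python
import numpy as np

def project(x: float) -> float:
    """Canonicalization for the integer-translation group on R:
    pick the orbit representative in [0, 1). Discontinuous at integers."""
    return x - np.floor(x)

def end_to_end(x: float) -> float:
    """Composing the discontinuous projection with a 1-periodic readout
    f(t) = sin(2*pi*t) gives a continuous, translation-invariant model."""
    return float(np.sin(2.0 * np.pi * project(x)))

eps = 1e-9
print(project(1.0 - eps), project(1.0))        # jump: ~1.0 vs 0.0
print(end_to_end(1.0 - eps), end_to_end(1.0))  # continuous: both ~0.0
```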
Focusing on the regularity of the end-to-end map rather than that of the projection map itself, this paper studies the continuity and regularity of canonicalized models from a theoretical perspective. For a broad class of input spaces and group actions, we establish necessary and sufficient conditions for the continuity and regularity (of any order) of canonicalized models, thereby characterizing the minimal conditions required for stability.
To our knowledge, this is the first comprehensive investigation of the end-to-end regularity of canonicalized models, offering critical insights into their design and application, as well as guidance for enhancing stability in practice.
Submission Number: 1823