Keywords: equivariance, symmetry, symmetry breaking, canonicalization, data augmentation
TL;DR: We present an interpretable metric for quantifying the degree of distributional symmetry-breaking in point cloud datasets and use it to show that benchmark point cloud datasets are highly anisotropic.
Abstract: Symmetry-aware methods for machine learning, such as data augmentation and equivariant architectures, encourage correct model behavior on all transformations (e.g. rotations or permutations) of the original dataset. These methods can impart improved generalization and sample efficiency, under the assumption that the transformed datapoints are highly probable, or "important", under the test distribution. In this work, we develop a method for critically evaluating this assumption. In particular, we quantify the amount of anisotropy, or symmetry-breaking, in a dataset, via a two-sample neural classifier test that distinguishes between the original dataset and its randomly augmented equivalent. We validate our metric on synthetic datasets, and then use it to uncover surprisingly high degrees of anisotropy in several benchmark point cloud datasets. In theory, this kind of distributional symmetry-breaking can actually preclude invariant methods from performing optimally even when the underlying labels truly are invariant, as we show for invariant ridge regression in the infinite feature limit. In practice, we find that the implication for symmetry-aware methods is dataset-dependent: equivariant methods still impart benefits on some anisotropic datasets, but not others. Overall, these findings suggest that understanding equivariance — both when it works, and why — may require rethinking symmetry biases in the data.
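The abstract's two-sample neural classifier test can be illustrated with a minimal sketch: train a classifier to distinguish original point clouds from randomly rotated copies, and read its held-out accuracy as an anisotropy score (chance accuracy means no detectable symmetry-breaking). The code below is an illustrative sketch only, not the paper's implementation; the synthetic anisotropic Gaussian dataset, the network architecture, and all hyperparameters are assumptions.

```python
# Sketch of a classifier two-sample test for rotational symmetry-breaking.
# Assumptions: synthetic data, a tiny DeepSets-style classifier, SO(3) augmentation.
import torch
import torch.nn as nn

def random_rotations(n):
    # Uniform random 3D rotations via QR decomposition of Gaussian matrices.
    A = torch.randn(n, 3, 3)
    Q, R = torch.linalg.qr(A)
    d = torch.diagonal(R, dim1=-2, dim2=-1).sign()
    Q = Q * d.unsqueeze(-2)                      # Haar-uniform on O(3)
    det = torch.linalg.det(Q)
    Q[:, :, 0] = Q[:, :, 0] * det.unsqueeze(-1)  # flip a column if det = -1, giving SO(3)
    return Q

class PointCloudClassifier(nn.Module):
    # Permutation-invariant classifier: shared point-wise MLP followed by max pooling.
    def __init__(self, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, x):                         # x: (batch, points, 3)
        h = self.phi(x).max(dim=1).values
        return self.rho(h).squeeze(-1)            # logit for "augmented" vs "original"

def sample_batch(batch=64, points=128):
    # Hypothetical anisotropic dataset: Gaussian clouds squashed along the z-axis.
    x = torch.randn(batch, points, 3) * torch.tensor([1.0, 1.0, 0.1])
    y = (torch.rand(batch) < 0.5).float()         # 1 = augmented copy, 0 = original
    R = random_rotations(batch)
    x_aug = torch.einsum('bij,bpj->bpi', R, x)
    x = torch.where(y.view(-1, 1, 1) > 0, x_aug, x)
    return x, y

model = PointCloudClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(500):
    x, y = sample_batch()
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# Held-out accuracy well above 0.5 indicates the dataset breaks rotational symmetry.
with torch.no_grad():
    x, y = sample_batch(batch=1024)
    acc = ((model(x) > 0).float() == y).float().mean()
print(f"two-sample test accuracy: {acc:.3f}")
```

In this toy setting the clouds are elongated in the xy-plane, so the classifier easily separates originals from rotated copies; for an isotropic dataset the accuracy would hover near chance.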
Supplementary Material: zip
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Submission Number: 15601