Probing Equivariance and Symmetry Breaking in Convolutional Networks

Published: 18 Sept 2025, Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY-SA 4.0
Keywords: convolution networks, geometric deep learning, equivariance, symmetry-breaking, generative modeling
TL;DR: A study on equivariant and non-equivariant convolution networks for point clouds
Abstract: In this work, we explore the trade-offs of explicit structural priors, particularly group equivariance. We address this question through theoretical analysis and a comprehensive empirical study focused on point clouds. To enable controlled and fair comparisons, we introduce \texttt{Rapidash}, a unified group convolutional architecture that supports both equivariant and non-equivariant model variants. Our results suggest that more constrained equivariant models outperform less constrained alternatives when aligned with the geometry of the task, and that increasing representation capacity does not fully close the performance gap. We observe improved performance from models with equivariance and symmetry breaking on segmentation, regression, and generation tasks across diverse datasets. Explicit \textit{symmetry breaking} via geometric reference frames consistently improves performance, while \textit{breaking equivariance} through geometric input features can be helpful when aligned with task geometry. Our results reveal task-specific performance trends that offer more nuanced guidance for model selection. Code is available at [github.com/Sharvaree/EquivarianceStudy](https://github.com/Sharvaree/EquivarianceStudy).
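
To make the central property concrete: a point-cloud model $f$ is rotation-equivariant when $f(Rx) = Rf(x)$ for every rotation $R$. The sketch below is a minimal, hypothetical illustration of how one might empirically probe this property in PyTorch; it is not the paper's \texttt{Rapidash} architecture, and the helper names (`random_rotation`, `equivariance_gap`, the centering "model") are stand-ins chosen for illustration.

```python
import torch

def random_rotation(dim: int = 3) -> torch.Tensor:
    """Sample a (approximately Haar-)random rotation via QR decomposition."""
    q, r = torch.linalg.qr(torch.randn(dim, dim))
    q = q * torch.sign(torch.diagonal(r))   # fix column signs for uniqueness
    if torch.linalg.det(q) < 0:             # flip one axis if det = -1
        q[:, 0] = -q[:, 0]
    return q

def equivariance_gap(model, points: torch.Tensor, rot: torch.Tensor) -> float:
    """Relative gap between model(R·x) and R·model(x); ~0 for an equivariant model."""
    rotate_then_apply = model(points @ rot.T)
    apply_then_rotate = model(points) @ rot.T
    return ((rotate_then_apply - apply_then_rotate).norm()
            / apply_then_rotate.norm()).item()

# A trivially rotation-equivariant "model": centering the point cloud.
center = lambda x: x - x.mean(dim=0, keepdim=True)

x = torch.randn(1024, 3)                          # toy point cloud
print(equivariance_gap(center, x, random_rotation()))  # ~0, up to float error
```

Running the same gap on a non-equivariant model (or on one given symmetry-breaking geometric input features) yields a nonzero value, which is one simple way to quantify how strongly a trained network respects or breaks the symmetry.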
Supplementary Material: zip
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 10608