Rotation invariance vs non-invariance: Why not both in one network?

NeurIPS 2025 Workshop NeurReps, Submission 19

20 Aug 2025 (modified: 29 Oct 2025) · Submitted to NeurReps 2025 · CC BY 4.0
Keywords: Point-Cloud Classification, Rotation-Invariant Networks, Rotation-Equivariant Deep Learning, Geometric Deep Learning, Ensemble Learning, Data-Augmentation-Free Training, Robustness to Geometric Transformations
TL;DR: Combine rotation-dependent and rotation-invariant features for point-cloud classification: stay rotation-robust while remaining expressive.
Abstract: The lack of robustness of many state‑of‑the‑art computer vision models to geometric transformations (rotation, scaling, etc.) has long been a recognized problem. While this issue is commonly tackled with data augmentation, geometric deep learning offers an elegant augmentation-free solution, at the cost of lower performance on standard benchmarks. We investigate whether we can obtain the best of both worlds by combining rotation-dependent and rotation-invariant features for point-cloud classification. A simple ensemble of a regular and a rotation-invariant deep point-cloud network, joined by a shared classifier head and trained with a simple loss function, boosts point-cloud classification accuracy and increases robustness to arbitrary rotations. The presented approach works without rotation augmentation during training and is applicable to both pose-aligned and non-aligned datasets.
Submission Number: 19
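The abstract describes an ensemble of a regular (rotation-dependent) and a rotation-invariant point-cloud encoder whose features are fused by a joint classifier head. The sketch below illustrates that general structure only; it is not the authors' implementation. The class name `JointRotationEnsemble`, the encoder arguments, the feature dimensions, the 256-unit hidden layer, and the single cross-entropy joint loss are all illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): two point-cloud encoders, one
# rotation-dependent and one rotation-invariant, feeding a joint classifier head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointRotationEnsemble(nn.Module):
    def __init__(self, regular_encoder: nn.Module, invariant_encoder: nn.Module,
                 regular_dim: int, invariant_dim: int, num_classes: int):
        super().__init__()
        self.regular_encoder = regular_encoder      # e.g. a standard point-cloud backbone (rotation-dependent)
        self.invariant_encoder = invariant_encoder  # e.g. a rotation-invariant backbone
        self.head = nn.Sequential(                  # joint classifier over concatenated features
            nn.Linear(regular_dim + invariant_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3); each encoder is assumed to return a (batch, dim) global feature
        f_reg = self.regular_encoder(points)    # rotation-dependent global feature
        f_inv = self.invariant_encoder(points)  # rotation-invariant global feature
        return self.head(torch.cat([f_reg, f_inv], dim=-1))

def joint_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Simplified joint objective: cross-entropy on the fused head.
    # Any per-branch auxiliary terms the paper may use are not reproduced here.
    return F.cross_entropy(logits, labels)
```

Consistent with the abstract, such a model would be trained without rotation augmentation; the invariant branch is what supplies robustness to arbitrary test-time rotations, while the regular branch preserves expressiveness on pose-aligned data.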