Abstract: Learning invariant representations has been the longstanding approach to self-supervised learning. Recently, however, progress has been made in preserving equivariant properties in representations, though existing methods do so with highly prescribed architectures. In this work, we propose an invariant-equivariant self-supervised architecture that employs Capsule Networks (CapsNets), which have been shown to capture equivariance with respect to novel viewpoints. We demonstrate that using CapsNets in equivariant self-supervised architectures improves downstream performance on equivariant tasks with higher efficiency and fewer network parameters. To accommodate the architectural changes introduced by CapsNets, we introduce a new objective function based on entropy minimisation. This approach, which we name CapsIE (Capsule Invariant Equivariant Network), achieves state-of-the-art performance on the equivariant rotation tasks of the 3DIEBench dataset compared to prior equivariant SSL methods, while performing competitively against supervised counterparts. Our results demonstrate the ability of CapsNets to learn complex and generalised representations on large-scale, multi-task datasets compared to previous CapsNet benchmarks.
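To make the entropy-minimisation idea concrete, the following is a minimal, illustrative sketch (not the paper's actual objective): it computes the mean entropy of softmax-normalised capsule activations, which, when minimised, encourages each sample to commit to a small number of active capsules. The function name and the per-sample list-of-logits layout are assumptions for illustration only.

```python
import math

def entropy_minimisation_loss(capsule_logits):
    """Mean entropy of softmax-normalised capsule activations.

    capsule_logits: list of per-sample lists of unnormalised
    capsule activations, e.g. [[2.0, -1.0, 0.5], ...].
    Minimising the returned value pushes each sample toward a
    confident (low-entropy) distribution over capsules.
    """
    total = 0.0
    for logits in capsule_logits:
        # Numerically stable softmax over this sample's capsules.
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        z = sum(exps)
        probs = [e / z for e in exps]
        # Shannon entropy of the capsule distribution.
        total += -sum(p * math.log(p) for p in probs if p > 0.0)
    return total / len(capsule_logits)
```

Uniform activations give the maximum entropy (ln of the number of capsules), while a single dominant capsule drives the loss toward zero.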
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We would like to thank the Action Editor and the Reviewers for their constructive comments and valuable insights, which have significantly improved our paper.
We have submitted the camera-ready version, which addresses all four major points as well as the eight minor issues raised. In particular:
a) We now explicitly note that some experiments are based on a single run due to computational constraints.
b) Terminology around the objective functions and rotation parameterisation has been made consistent across the main text and appendix.
c) R² values have been standardised to avoid ambiguity.
d) We have clarified the description of MOVi-E fine-tuning and refined the conclusion to accurately reflect the colour prediction results.
e) We now explicitly state the number of seeds used in results, specify additional efficiency details, and provide a permanent link to the released code.
Code: https://github.com/AberdeenML/CapsIE
Supplementary Material: zip
Assigned Action Editor: ~Yoshinobu_Kawahara1
Submission Number: 4991