Canonical Capsules: Self-Supervised Capsules in Canonical Pose

May 21, 2021 (edited Dec 17, 2021) · NeurIPS 2021 Poster
  • Keywords: object-centric representation learning, capsules, primary capsules, unsupervised, self-supervised, 3D point clouds
  • TL;DR: A self-supervised capsule architecture that canonicalizes data while simultaneously decomposing point clouds into parts to perform unsupervised representation learning.
  • Abstract: We propose a self-supervised capsule architecture for 3D point clouds. We compute capsule decompositions of objects through permutation-equivariant attention, and self-supervise the process by training with pairs of randomly rotated objects. Our key idea is to aggregate the attention masks into semantic keypoints, and use these to supervise a decomposition that satisfies the capsule invariance/equivariance properties. This not only enables the training of a semantically consistent decomposition, but also allows us to learn a canonicalization operation that enables object-centric reasoning. To train our neural network we require neither classification labels nor manually-aligned training datasets. Yet, by learning an object-centric representation in a self-supervised manner, our method outperforms the state-of-the-art on 3D point cloud reconstruction, canonicalization, and unsupervised classification.
  • Supplementary Material: zip
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: https://github.com/canonical-capsules/canonical-capsules
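The abstract's key idea — aggregating per-point attention masks into semantic keypoints — can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repository for that); it assumes the attention over capsules has already been computed, and simply shows that attention-weighted aggregation of points into keypoints is linear in the point coordinates, hence exactly rotation-equivariant, which is the property the self-supervised training with randomly rotated pairs exploits:

```python
import numpy as np

def capsule_keypoints(points, attn):
    """Aggregate per-point attention masks into one keypoint per capsule.

    points: (N, 3) point cloud
    attn:   (N, K) non-negative attention over K capsules
    returns (K, 3) keypoints = attention-weighted centroids
    """
    weights = attn / attn.sum(axis=0, keepdims=True)  # normalize per capsule
    return weights.T @ points  # (K, 3)

# Toy demo with random data (illustrative only, not the trained network).
rng = np.random.default_rng(0)
points = rng.normal(size=(128, 3))
logits = rng.normal(size=(128, 4))
attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax over capsules

# A rotation about the z-axis.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

kp = capsule_keypoints(points, attn)
kp_rot = capsule_keypoints(points @ R.T, attn)
# Rotate-then-aggregate equals aggregate-then-rotate (equivariance).
assert np.allclose(kp_rot, kp @ R.T)
```

Because the aggregation is a fixed linear map applied to the coordinates, equivariance holds exactly whenever the attention itself is invariant to the rotation, which is what the permutation-equivariant attention and paired-rotation training aim to achieve.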