Towards the Characterization of Representations Learned via Capsule-based Network Architectures

20 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: visualization or interpretation of learned representations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: capsule networks, interpretation, hierarchical relationships, explanations, representation learning, perturbations, part-whole relationships
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We assess the interpretability of several capsule networks by analyzing the extent to which part-whole relationships are encoded within the learned representations.
Abstract: Capsule Networks (CapsNets) have been re-introduced as a more compact and interpretable alternative to standard deep neural networks. While recent efforts have demonstrated their compression capabilities, to date, their interpretability properties have not been fully assessed. Here, we conduct a systematic and principled study towards assessing the interpretability of these types of networks. We pay special attention to analyzing the level to which part-whole relationships are encoded within the learned representation. Our analysis of several capsule-based architectures on the MNIST, SVHN, PASCAL-Part, and CelebA datasets suggests that the representations encoded in CapsNets might not be as disentangled, nor as strictly related to part-whole relationships, as is commonly stated in the literature.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2884