Investigation of Capsule Networks Regarding their Potential of Explainability and Image Rankings

Published: 01 Jan 2022, Last Modified: 08 Oct 2024. ICAART (3) 2022. License: CC BY-SA 4.0
Abstract: Explainable Artificial Intelligence (AI) is a long-range goal that can be approached from different viewpoints. One way is to simplify the complex AI model into an explainable one; another uses post-processing to highlight the input features most important for the classification. In this work, we focus on explaining image classification with capsule networks using dynamic routing. We train a capsule network on the EMNIST letter dataset and examine the model's explanatory potential. We show that the length of a class-specific vector (squash vector) of the capsule network can be interpreted as a predicted probability, and that it correlates with the agreement between the decoded image and the original image. We use the predicted probabilities to rank images within one class. By decoding different squash vectors, we visualize how the model interprets an image as each of the corresponding classes. Finally, we create a set of modified letters to examine which features con
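The length-as-probability reading above follows from the squash nonlinearity of capsule networks (Sabour et al., 2017), which compresses a capsule's output vector to a length in [0, 1) while preserving its direction. A minimal NumPy sketch, assuming illustrative dimensions (26 letter classes, 16-D capsules) rather than the paper's exact architecture:

```python
import numpy as np

def squash(s, eps=1e-8):
    # Capsule "squash" nonlinearity: v = (|s|^2 / (1 + |s|^2)) * s / |s|.
    # The output's length lies in [0, 1), so it can be read as a probability.
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

# Hypothetical class-capsule outputs: 26 letter classes, 16-D capsules.
caps = np.random.randn(26, 16)
probs = np.linalg.norm(squash(caps), axis=-1)   # per-class predicted probability
ranking = np.argsort(-probs)                    # classes ordered by confidence
```

Ranking images within one class, as described in the abstract, amounts to sorting the images by this squash-vector length for that class's capsule.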