AutoProtoNet: Interpretability for Prototypical Networks

Anonymous

30 Sept 2021 (modified: 12 Mar 2024) · NeurIPS 2021 Workshop on Meta-Learning · Blind Submission
Keywords: meta-learning, prototypical networks, ProtoNets, interpretability, autoencoder, prototype visualization
TL;DR: We introduce AutoProtoNet, which merges ideas from autoencoders and Prototypical Networks, to perform few-shot image classification and prototype reconstruction for interpretability.
Abstract: In meta-learning approaches, it is difficult for a practitioner to make sense of what kind of representations the model employs. Without this ability, it can be difficult both to understand what the model knows and to make meaningful corrections. To address these challenges, we introduce AutoProtoNet, which builds interpretability into Prototypical Networks by training an embedding space suitable for reconstructing inputs, while remaining convenient for few-shot learning. We demonstrate how points in this embedding space can be visualized and used to understand class representations. We also devise a prototype refinement method, which allows a human to debug inadequate classification parameters. We use this debugging technique on a custom classification task and find that it leads to accuracy improvements on a validation set consisting of in-the-wild images. We advocate for interpretability in meta-learning approaches and show that there are interactive ways for a human to enhance meta-learning algorithms.
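The abstract's core idea, an embedding space shared between a Prototypical-Network classifier and an autoencoder's decoder, can be sketched roughly as follows. This is an illustrative toy, not the paper's architecture: the linear `encode`/`decode` maps, the dimensions, and all variable names are assumptions standing in for the trained CNN encoder and decoder.

```python
# Minimal sketch of the AutoProtoNet idea, assuming a linear encoder/decoder
# pair in place of the paper's trained convolutional autoencoder.
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_EMB = 16, 4                                 # toy input / embedding sizes
W_enc = rng.standard_normal((D_IN, D_EMB)) * 0.1    # assumed, untrained weights
W_dec = rng.standard_normal((D_EMB, D_IN)) * 0.1

def encode(x):
    # Embed inputs: (n, D_IN) -> (n, D_EMB)
    return x @ W_enc

def decode(z):
    # Map embeddings back to input space: (n, D_EMB) -> (n, D_IN).
    # Decoding a prototype is what enables prototype visualization.
    return z @ W_dec

def prototypes(support, labels, n_way):
    # Prototypical Networks: each class prototype is the mean of its
    # support-set embeddings.
    z = encode(support)
    return np.stack([z[labels == c].mean(axis=0) for c in range(n_way)])

def classify(query, protos):
    # Assign each query to the nearest prototype (squared Euclidean distance).
    d = ((encode(query)[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# A 2-way, 3-shot toy episode.
support = rng.standard_normal((6, D_IN))
labels = np.array([0, 0, 0, 1, 1, 1])
protos = prototypes(support, labels, n_way=2)
preds = classify(support, protos)       # few-shot classification
proto_images = decode(protos)           # (2, D_IN): "visualized" prototypes
```

In the paper the encoder and decoder are trained so that the same embedding both supports this nearest-prototype classification and reconstructs inputs well; the refinement step then lets a human adjust a prototype in embedding space while inspecting its decoded image.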
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2204.00929/code)
