Interactive Visual Feature Search

Published: 27 Oct 2023, Last Modified: 23 Nov 2023, NeurIPS XAIA 2023
TL;DR: We introduce a novel interactive tool for visualizing internal CNN features that can easily be applied to almost any computer vision model.
Abstract: Many visualization techniques have been created to explain the behavior of computer vision models, but they largely consist of static diagrams that convey limited information. Interactive visualizations allow users to more easily interpret a model's behavior, but most are not easily reusable for new models. We introduce Visual Feature Search, a novel interactive visualization that is adaptable to any CNN and can easily be incorporated into a researcher's workflow. Our tool allows a user to highlight an image region and search a given dataset for the images with the most similar model features. We demonstrate how our tool elucidates different aspects of model behavior by performing experiments across a range of applications, including medical imaging and wildlife classification.
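To make the search step concrete, the sketch below shows one plausible way to implement it with a standard PyTorch/torchvision setup: crop the highlighted query region out of a CNN feature map, pool it into an embedding, and rank a precomputed gallery of dataset embeddings by cosine similarity. The ResNet-50 backbone, the `region_embedding` and `search` helpers, and the pooling and similarity choices are illustrative assumptions, not the tool's actual implementation.

```python
# A minimal sketch of region-based feature similarity search, assuming a
# torchvision ResNet-50 backbone. This is NOT the authors' released code;
# helper names and the pooling/cosine-similarity choices are illustrative.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Truncate a pretrained CNN so it returns spatial feature maps instead of logits.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2]).eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def region_embedding(image: Image.Image, box) -> torch.Tensor:
    """Embed a highlighted region, given as a fractional (left, top, right,
    bottom) box in [0, 1], by average-pooling the feature map over it."""
    feats = feature_extractor(preprocess(image).unsqueeze(0))[0]  # (C, H, W)
    _, h, w = feats.shape
    l, t, r, b = box
    crop = feats[:,
                 int(t * h):max(int(b * h), int(t * h) + 1),
                 int(l * w):max(int(r * w), int(l * w) + 1)]
    return F.normalize(crop.mean(dim=(1, 2)), dim=0)  # unit-length (C,) vector

@torch.no_grad()
def search(query: torch.Tensor, gallery: torch.Tensor, k: int = 5):
    """Return indices of the k dataset images whose precomputed, L2-normalized
    embeddings (rows of `gallery`, shape (N, C)) have the highest cosine
    similarity to the query region embedding."""
    scores = gallery @ query  # (N,) cosine similarities
    return scores.topk(k).indices.tolist()
```

In an interactive setting, the `gallery` of dataset embeddings would be computed once per dataset, so each newly highlighted region only requires a single forward pass and a matrix-vector product.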
Submission Track: Demo Track
Application Domain: Computer Vision
Survey Question 1: Many recent works in computer vision interpretability have focused on building interactive tools to visualize models, as these tools can display large amounts of information while remaining easy to use. However, most existing interactive tools were designed to work with a handful of pre-selected CNNs and cannot easily be adapted to visualize novel models, so they are unfortunately rarely used by researchers. In this paper, we introduce a new interactive explainability tool for computer vision models that is designed to quickly and easily visualize arbitrary models; our goal is to enable researchers and practitioners to use our interactive visualization technique whenever they wish to visualize a new model.
Survey Question 2: Computer vision models are typically very large and complex, so it is difficult to analyze a model's intermediate feature data to verify that a particular experiment worked, debug an erroneous prediction, or otherwise improve the model's performance. By creating a new technique for interactively visualizing computer vision models, we aim to allow other researchers to more quickly and intuitively understand the internal state of their own models, rather than relying on static visualization tools that convey less information.
Survey Question 3: We introduce a novel explainability method that performs similarity search to visualize what a CNN considers to be most "similar" to a given input image region.
Submission Number: 37