Explaining an Image Classifier with a Generative Model Conditioned by Uncertainty

Published: 01 Jan 2023 · Last Modified: 11 Apr 2025 · PKDD/ECML Workshops (2) 2023 · CC BY-SA 4.0
Abstract: Identifying sources of uncertainty in an image classifier is a crucial challenge. Indeed, the decision process of these models is opaque and does not necessarily correspond to what we might expect. To help characterize classifiers, generative models can be used, as they allow control over visual attributes. Here we use a generative adversarial network to generate images that reflect how the classifier perceives them. More specifically, we take the classifier's maximum softmax probability as an uncertainty estimate and use it as an additional input to condition the generative model. This allows us to generate images that result in uncertain predictions, giving us a global view of which images are harder to classify. We can also increase the uncertainty of a given image and observe the impact of an attribute, providing a more local understanding of the decision process. We perform experiments on the MNIST dataset, augmented with corruptions. We believe that generative models are a helpful tool for explaining the behavior and uncertainties of image classifiers.
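
Below is a minimal sketch, not the authors' code, of the conditioning scheme the abstract describes: a generator that receives, alongside the usual latent noise and class label, a scalar uncertainty value derived from the classifier's maximum softmax probability. The `Generator` architecture, layer sizes, and all names here are illustrative assumptions for MNIST-sized (1x28x28) images.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Illustrative conditional generator: noise + class label + uncertainty -> image."""
    def __init__(self, latent_dim=100, n_classes=10):
        super().__init__()
        # Input: latent noise, one-hot class label, and a scalar uncertainty value.
        self.net = nn.Sequential(
            nn.Linear(latent_dim + n_classes + 1, 256),
            nn.ReLU(),
            nn.Linear(256, 512),
            nn.ReLU(),
            nn.Linear(512, 28 * 28),
            nn.Tanh(),
        )

    def forward(self, z, y_onehot, uncertainty):
        # uncertainty: shape (batch, 1), e.g. 1 - maximum softmax probability.
        x = torch.cat([z, y_onehot, uncertainty], dim=1)
        return self.net(x).view(-1, 1, 28, 28)

def max_softmax_uncertainty(classifier, images):
    """Uncertainty estimate: 1 minus the classifier's maximum softmax probability."""
    with torch.no_grad():
        probs = F.softmax(classifier(images), dim=1)
    return 1.0 - probs.max(dim=1, keepdim=True).values

# Usage sketch: ask the trained generator for images the classifier should find
# hard to classify by requesting a high uncertainty value.
# G = Generator()
# z = torch.randn(16, 100)
# y = F.one_hot(torch.randint(0, 10, (16,)), 10).float()
# hard_images = G(z, y, torch.full((16, 1), 0.9))  # request high uncertainty
```

During training, `max_softmax_uncertainty` would supply the conditioning value for real images, so that at sampling time the uncertainty input can be varied to sweep from confidently classified to ambiguous examples, matching the global and local analyses described above.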