Track: Extended Abstract Track
Keywords: XAI, Relative Representations, Uncertainty
TL;DR: Ensembles are confounded by reparameterization. By transforming the embeddings to a space of relative proximity, we show that uncertainty in the latent space decreases.
Abstract: Many explainable artificial intelligence (XAI) methods investigate the embedding space of a given neural network. Uncertainty quantification in these spaces can lead to a better understanding of the mechanisms learned by the network. To quantify the uncertainty of functions in latent spaces, we can invoke ensembles of trained models. However, such ensembles can be confounded by reparameterization, i.e., a lack of identifiability. We consider two mechanisms for reducing reparameterization "noise": one based on relative representations and one based on interpolation in weight space. By sampling embedding spaces along a curve that connects two fully converged networks without an increase in loss, we show that latent uncertainty is overestimated when embedding spaces are compared without accounting for reparameterization. By transforming the absolute embedding space to a space of relative proximity, we show that the spaces become aligned and the measured uncertainty decreases. Using this method, we show that the most non-trivial changes to the latent space occur around the midpoint of the curve connecting the independently trained networks.
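A minimal sketch of the relative-representation idea described in the abstract, assuming the standard recipe of re-expressing each embedding by its cosine similarity to a fixed set of anchor embeddings; the function name, anchor choice, and random data below are illustrative, not the authors' implementation.

```python
import numpy as np

def relative_representation(embeddings, anchors):
    """Map absolute embeddings to a space of relative proximity.

    Each embedding is re-expressed as its cosine similarity to a set of
    anchor embeddings, which makes latent spaces of independently trained
    (reparameterized) networks comparable.

    embeddings: (n, d) absolute latent vectors
    anchors:    (k, d) anchor latent vectors from the same model
    returns:    (n, k) relative coordinates
    """
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    anc = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return emb @ anc.T

# Hypothetical usage: z_a, z_b are embeddings of the same inputs from two
# ensemble members; the same anchor inputs are embedded by each network.
rng = np.random.default_rng(0)
z_a, z_b = rng.normal(size=(100, 16)), rng.normal(size=(100, 16))
rel_a = relative_representation(z_a, anchors=z_a[:10])
rel_b = relative_representation(z_b, anchors=z_b[:10])
# Latent uncertainty can then be measured as disagreement between rel_a and
# rel_b in the aligned relative space rather than in the absolute one.
```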
Submission Number: 39