Exploring XAI for the Arts: Explaining Latent Space in Generative Music

Published: 17 Oct 2021 · Last Modified: 12 Mar 2024 · XAI 4 Debugging Workshop @ NeurIPS 2021 Poster · Readers: Everyone
Keywords: explainable AI, XAI, latent space, latent space regularisation, creative AI, XAI arts, generative music
TL;DR: As a first step in exploring Explainable AI for the arts, we demonstrate how a latent variable model, specifically MeasureVAE, which generates measures of music, can be made more explainable.
Abstract: Explainable AI has the potential to support more interactive and fluid co-creative AI systems which can creatively collaborate with people. To do this, creative AI models need to be amenable to debugging by offering eXplainable AI (XAI) features which are inspectable, understandable, and modifiable. However, there is currently very little XAI for the arts. In this work, we demonstrate how a latent variable model for music generation can be made more explainable; specifically, we extend MeasureVAE, which generates measures of music. We increase the explainability of the model by: i) using latent space regularisation to force specific dimensions of the latent space to map to meaningful musical attributes, ii) providing a user interface feedback loop which allows people to adjust dimensions of the latent space and observe the results of these changes in real time, and iii) providing a visualisation of the musical attributes in the latent space to help people understand and predict the effect of changes to latent space dimensions. We suggest that in doing so we bridge the gap between the latent space and the generated musical outcomes in a meaningful way, which makes the model and its outputs more explainable and more debuggable. The code repository can be found at: https://github.com/bbanar2/Exploring_XAI_in_GenMus_via_LSR.
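
As a rough illustration of point i), the sketch below shows one common way to regularise a latent dimension against a musical attribute in PyTorch: pairwise differences of the chosen latent dimension across a batch are encouraged to follow the ordering of pairwise differences of the attribute. The function name `attribute_regularisation_loss`, the arguments `reg_dim` and `gamma`, and the exact tanh/sign formulation are illustrative assumptions and may not match the formulation used in the paper's repository.

```python
import torch
import torch.nn.functional as F

def attribute_regularisation_loss(z, attribute, reg_dim, gamma=1.0):
    """Hypothetical attribute-based latent regularisation term.

    Encourages latent dimension `reg_dim` to vary monotonically with a
    musical attribute (e.g. note density) across a training batch.

    z:         (batch, latent_dim) latent codes from the encoder
    attribute: (batch,) attribute value computed for each input measure
    """
    # Pairwise differences of the regularised latent dimension and of the attribute.
    z_d = z[:, reg_dim]                                    # (batch,)
    z_diff = z_d.unsqueeze(1) - z_d.unsqueeze(0)           # (batch, batch)
    attr_diff = attribute.unsqueeze(1) - attribute.unsqueeze(0)

    # Penalise disagreement between the ordering of latent values
    # and the ordering of attribute values.
    return F.l1_loss(torch.tanh(gamma * z_diff), torch.sign(attr_diff))
```

In a setup like this, one such term per (regularised dimension, musical attribute) pair would be added to the usual VAE reconstruction and KL divergence objective, so that moving along a regularised dimension produces a predictable change in the corresponding attribute of the generated measure.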
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:2308.05496/code) (via CatalyzeX)