Explainable AI in Music Performance: Case Studies from Live Coding and Sound Spatialisation

Published: 27 Oct 2023, Last Modified: 22 Nov 2023, NeurIPS XAIA 2023
Abstract: Explainable Artificial Intelligence (XAI) has emerged as a significant area of research, with diverse applications across various fields. In the arts, the application and implications of XAI remain largely unexplored. This paper investigates how artist-researchers address and navigate explainability in their systems during creative AI/ML practice, focusing on music performance. We present two case studies: live coding of AI/ML models and sound spatialisation performance. In the first case, we explore the inherent explainability of live coding and how integrating interactive, on-the-fly machine learning processes can enhance it. In the second case, we investigate how sound spatialisation can serve as a powerful tool for understanding and navigating the latent dimensions of autoencoders. Our autoethnographic reflections reveal the complexities and nuances of applying XAI in the arts, and underscore the need for further research in this area. We conclude that exploring XAI in the arts, particularly in music performance, opens up new avenues for understanding and improving the interaction between artists and AI/ML systems. This research contributes to the broader discussion on the diverse applications of XAI, with the ultimate goal of extending the frontiers of applied XAI.
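To make the first case study concrete, below is a minimal, hypothetical Python sketch of interactive, on-the-fly machine learning as it might appear in a live-coding session. The k-nearest-neighbour mapping, the gesture features, and the synthesis parameters are illustrative assumptions, not the system described in the paper; the point is that re-fitting a small, inspectable model one evaluated line at a time keeps its behaviour legible to the performer.

```python
# Hypothetical sketch of on-the-fly ML in a live-coding session: a tiny
# regressor is re-fitted each time the performer adds a new
# (gesture -> synth parameter) example, so every change in the model's
# behaviour traces back to a visible line of code or data.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

examples_x, examples_y = [], []   # training pairs grown live

def add_example(gesture, params):
    """Evaluated from the live-coding editor: record one demonstration."""
    examples_x.append(gesture)
    examples_y.append(params)

def fit_model():
    """Re-fit instantly; k-NN keeps the mapping inspectable, since each
    prediction can be explained by its nearest stored examples."""
    k = min(3, len(examples_x))
    return KNeighborsRegressor(n_neighbors=k).fit(examples_x, examples_y)

# -- a few lines as they might be evaluated mid-performance --
add_example([0.1, 0.2], [220.0, 0.3])   # gesture -> (freq Hz, amp)
add_example([0.8, 0.9], [880.0, 0.7])
add_example([0.5, 0.5], [440.0, 0.5])
model = fit_model()

query = [0.6, 0.4]
freq, amp = model.predict([query])[0]
_dists, idx = model.kneighbors([query])  # which examples explain it
print(f"predicted freq={freq:.1f} Hz, amp={amp:.2f}")
print("explained by training examples:", idx[0].tolist())
```

A nearest-neighbour model is used here precisely because each prediction can be explained by pointing at the stored examples, mirroring the inherent explainability the paper attributes to live coding.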
Submission Track: Full Paper Track
Application Domain: None of the above / Not applicable
Clarify Domain: Art, Music
Survey Question 1: In our work, we investigate the application of Explainable Artificial Intelligence (XAI) in music performance, specifically in the contexts of live coding and sound spatialisation. Explainability plays a crucial role in our research, as it enhances the understanding and interpretation of AI/ML-based artistic practices, providing artists with a deeper insight into the workings of their tools and systems.
Survey Question 2: Our motivation to incorporate explainability into our approach stemmed from the desire to make AI/ML systems more transparent and interpretable for artists. Methods that lack explainability often result in a 'black box' scenario, where the inner workings of the system are opaque, limiting the artist's understanding and control over the creative process.
Survey Question 3: Our approach to achieving explainability does not rely on specific methods like LIME, SHAP, GradCAM, or influence functions, but rather on the inherent explainability of live coding and the use of sound spatialisation as a tool for understanding and navigating the latent dimensions of autoencoders.
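As one way to picture the spatialisation approach in code, the sketch below drives two latent dimensions of a toy, untrained PyTorch decoder from a 2-D source position. The latent size, frame length, and position-to-latent mapping are assumptions for illustration only, not the authors' implementation; in practice the decoder would be the trained half of an audio autoencoder.

```python
# Hypothetical sketch: steering an autoencoder's latent space with
# spatial coordinates. Model sizes and mappings are illustrative
# assumptions, not the implementation described in the paper.
import numpy as np
import torch
import torch.nn as nn

LATENT_DIM = 8      # assumed latent size
FRAME_SIZE = 512    # assumed audio frame length (samples)

class Decoder(nn.Module):
    """Toy decoder half of an audio autoencoder (untrained stand-in)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.Tanh(),
            nn.Linear(128, FRAME_SIZE), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

def position_to_latent(azimuth, distance, z_base):
    """Map a 2-D source position onto two latent dimensions, leaving
    the remaining dimensions at a base value."""
    z = z_base.clone()
    z[0] = np.cos(azimuth) * distance   # latent dim 0 <- x position
    z[1] = np.sin(azimuth) * distance   # latent dim 1 <- y position
    return z

decoder = Decoder()
z_base = torch.zeros(LATENT_DIM)

# Sweep a source around the listener, decoding one frame per position;
# hearing how the output changes makes those two latent dimensions
# audibly legible to the performer.
for azimuth in np.linspace(0, 2 * np.pi, 8, endpoint=False):
    z = position_to_latent(azimuth, distance=1.0, z_base=z_base)
    frame = decoder(z.unsqueeze(0)).detach().numpy()[0]
    rms = np.sqrt((frame ** 2).mean())
    print(f"azimuth {azimuth:4.2f} rad -> frame RMS {rms:.4f}")
```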
Submission Number: 33