Keywords: generative AI, music generation, ethics, AI for social good, computational creativity, responsible AI
TL;DR: Generative music AI embeds cultural/genre bias that misrepresents traditions (e.g., Indian Classical), erodes creator trust, and risks cultural erasure. We propose dataset, model, and interface fixes for inclusive music-AI systems.
Abstract: In recent years, the music research community has examined the risks of AI models for music; generative AI models in particular have raised concerns about copyright, deepfakes, and transparency. In our work, we raise concerns about cultural and genre biases in AI for music systems (music-AI systems), which affect stakeholders—including $\textit{creators}$, $\textit{distributors}$, and $\textit{listeners}$—and shape representation in AI for music. These biases can misrepresent marginalized traditions, especially those from the Global South, producing inauthentic outputs (e.g., distorted ragas) that reduce $\textit{creators'}$ trust in these systems. Such harms risk reinforcing bias, limiting creativity, and contributing to cultural erasure. To address this, we offer recommendations at the dataset, model, and interface levels of music-AI systems.
Track: Paper Track
Confirmation: Paper Track: I confirm that I have followed the formatting guideline and anonymized my submission.
Submission Number: 93