yammer

Published: 27 Sept 2025, Last Modified: 09 Nov 2025
NeurIPS Creative AI Track 2025
License: CC BY 4.0
Track: Artwork
Keywords: audio classification, audio installation, sound art
TL;DR: An interactive audio installation that uses YAMNet to classify live input and creates an immersive soundscape by weaving together elements of AudioSet belonging to the same event class
Abstract: yammer is an interactive audio installation and performance environment that questions the ambiguities and limitations inherent in attempts to describe and represent music and other complex human expressive sonic events using commonplace ontologies in audio classification systems and large language models. Live audio produced by visitors to the installation undergoes audio classification using YAMNet, and an immersive soundscape is created by combining the live audio input with playback and processing of elements of the AudioSet dataset belonging to the same putative audio event classes, often to humorous and nonsensical ends. Ultimately, yammer entreats those engaging with the installation to question not only the datasets used in audio classification, but also the datasets underlying many models with which they may engage on a daily basis. Additionally, it questions the artistic utility of text-to-sound and text-to-music models.
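For orientation, the sketch below shows one plausible shape of the classification step described in the abstract: classifying a live audio buffer with the publicly released YAMNet model from TensorFlow Hub and looking up same-class excerpts in a hypothetical local index of AudioSet clips (the `audioset_index` mapping is assumed for illustration). This is a minimal sketch of the general approach, not the installation's actual implementation.

```python
# Illustrative sketch: classify a live audio buffer with YAMNet and retrieve
# clips tagged with the same AudioSet class names from a hypothetical index.
import csv

import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Load the published YAMNet model (expects a mono float32 waveform at 16 kHz).
yamnet = hub.load("https://tfhub.dev/google/yamnet/1")

# Read the class map shipped with the model: row index -> display name.
class_map_path = yamnet.class_map_path().numpy().decode("utf-8")
with tf.io.gfile.GFile(class_map_path) as f:
    class_names = [row["display_name"] for row in csv.DictReader(f)]

def classify(waveform_16k: np.ndarray, top_k: int = 3):
    """Return the top-k (class name, mean score) pairs for a mono 16 kHz buffer."""
    scores, embeddings, spectrogram = yamnet(waveform_16k.astype(np.float32))
    mean_scores = tf.reduce_mean(scores, axis=0).numpy()
    top = np.argsort(mean_scores)[::-1][:top_k]
    return [(class_names[i], float(mean_scores[i])) for i in top]

def clips_for(live_buffer: np.ndarray, audioset_index: dict):
    """Gather paths of AudioSet excerpts whose class matches the live input.

    `audioset_index` is a hypothetical dict mapping class names to lists of
    local clip paths; the installation would mix such clips back into the
    live soundscape.
    """
    matches = []
    for name, _score in classify(live_buffer):
        matches.extend(audioset_index.get(name, []))
    return matches
```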
Thumbnail Image For Artwork: png
Video Preview For Artwork: mp4
Submission Number: 233