Did You Hear That? Introducing AADG: A Framework for Generating Benchmark Data in Audio Anomaly Detection

Published: 04 Mar 2025, Last Modified: 17 Apr 2025 · ICLR 2025 Workshop SynthData · CC BY 4.0
Keywords: Audio Data Generation, Synthetic Data Generation, Audio Anomaly Detection, Audio Processing, Large Language Models, Generative AI
TL;DR: Generating synthetic audio data with anomalies to train audio anomaly detection models and to improve current audio generation and audio-language models.
Abstract: We introduce a novel, general-purpose audio generation framework specifically designed for Audio Anomaly Detection and Localization. Unlike existing datasets, which predominantly focus on industrial and machine-related sounds, our framework targets a broader range of environments, making it particularly useful in real-world scenarios where only audio data are available, such as video-derived or telephonic audio. To generate such data, we propose a new method, Audio Anomaly Data Generation (AADG), inspired by the LLM-Modulo framework, which leverages Large Language Models (LLMs) as world models to simulate such real-world scenarios. The tool is modular, allowing for a plug-and-play approach. It works by first using an LLM to predict plausible real-world scenarios; an LLM then extracts the constituent sounds, the order in which they occur, and the way they should be merged into coherent wholes. We include rigorous verification of each output stage, ensuring the reliability of the generated data. The data produced by the framework serves as a benchmark for anomaly detection applications, potentially enhancing the performance of models trained on audio data, particularly in handling out-of-distribution cases. Our contributions thus fill a critical void in audio anomaly detection resources and provide a scalable tool for generating diverse, realistic audio data.
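To make the staged pipeline described in the abstract concrete, below is a minimal, hypothetical sketch of an AADG-style generation loop: an LLM proposes a scenario, an LLM decomposes it into timed sound events, the decomposition is verified, and the events are mixed into a clip with anomaly-span labels. All names (`llm_complete`, `verify_events`, `synthesize_event`) are illustrative assumptions, not the authors' code; the LLM call and the sound sources are stubbed with canned output and noise so the example runs standalone.

```python
# Hypothetical AADG-style pipeline sketch (not the authors' implementation).
import json
import numpy as np

SAMPLE_RATE = 16_000


def llm_complete(prompt: str) -> str:
    """Stand-in for an LLM API call; returns canned responses for illustration."""
    if "scenario" in prompt:
        return "A quiet office where a fire alarm suddenly goes off."
    return json.dumps({
        "events": [
            {"label": "keyboard_typing", "start_s": 0.0, "dur_s": 6.0, "anomaly": False},
            {"label": "air_conditioner_hum", "start_s": 0.0, "dur_s": 8.0, "anomaly": False},
            {"label": "fire_alarm", "start_s": 5.0, "dur_s": 3.0, "anomaly": True},
        ]
    })


def verify_events(events: list, clip_len_s: float) -> bool:
    """Reject decompositions that are empty, lack an anomaly, or overflow the clip."""
    if not events or not any(e["anomaly"] for e in events):
        return False
    return all(0.0 <= e["start_s"] and e["start_s"] + e["dur_s"] <= clip_len_s for e in events)


def synthesize_event(label: str, dur_s: float) -> np.ndarray:
    """Placeholder sound source: noise seeded by the label.
    A real pipeline would call a text-to-audio model or draw from a sample bank."""
    rng = np.random.default_rng(abs(hash(label)) % (2**32))
    return 0.1 * rng.standard_normal(int(dur_s * SAMPLE_RATE))


def build_clip(clip_len_s: float = 8.0):
    """Run scenario generation, decomposition, verification, and mixing."""
    scenario = llm_complete("Propose a plausible real-world scenario with one audio anomaly.")
    events = json.loads(llm_complete(f"Decompose into sound events: {scenario}"))["events"]
    if not verify_events(events, clip_len_s):
        raise ValueError("LLM output failed verification; re-prompt or discard.")

    mix = np.zeros(int(clip_len_s * SAMPLE_RATE))
    anomaly_spans = []  # (start_s, end_s) of anomalous events, usable as localization targets
    for e in events:
        start = int(e["start_s"] * SAMPLE_RATE)
        audio = synthesize_event(e["label"], e["dur_s"])
        mix[start:start + len(audio)] += audio
        if e["anomaly"]:
            anomaly_spans.append((e["start_s"], e["start_s"] + e["dur_s"]))
    return scenario, mix, anomaly_spans


if __name__ == "__main__":
    scenario, clip, spans = build_clip()
    print(scenario, clip.shape, spans)
```

The verification step mirrors the paper's emphasis on checking each stage's output before it is consumed downstream; in practice a failed check would trigger a re-prompt of the LLM rather than a hard error.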
Submission Number: 50