An Expert-in-the-Loop Toolbox for Explainable AI in Animal Communication

Published: 02 Oct 2025, Last Modified: 25 Nov 2025 · NeurIPS 2025 Poster · CC BY 4.0
Keywords: Explainable AI, XAI, Bioacoustics, Animal communication, Participatory annotation, Saliency maps, Spectrogram occlusion, Individual identification, Capuchin monkeys, Joint embeddings, MRMR, Denoising, Source separation, Feature importance
TL;DR: We present an expert-in-the-loop toolbox that adapts computer vision explainability methods to bioacoustics, enhancing participatory evaluation and scientific discovery in animal communication.
Abstract: Explainable AI (XAI) remains underdeveloped in bioacoustics, despite the growing reliance on high-performance black-box models. We evaluate the explainability of state-of-the-art models for capuchin monkey individual identification and introduce new methods to make bioacoustic classifiers more interpretable. Our approach combines participatory evaluation by domain experts, conducted through a web-based interface, with quantitative metrics that assess the correspondence between saliency maps and expert annotations. Specifically, we report metrics on ranking quality, spatial overlap, and distributional similarity, each computed under two complementary formulations of feature importance. To facilitate annotation, we introduce a web interface for pixel-level spectrogram labeling that provides interactive foreground and background audio playback, allowing experts to listen separately to masked regions, along with optional AI-assisted segmentation. These tools provide a reproducible framework for benchmarking explainability in bioacoustic models, advancing toward more transparent, collaborative, and biologically meaningful AI for animal communication.
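The abstract mentions three families of saliency-vs-annotation metrics: ranking quality, spatial overlap, and distributional similarity. As a minimal illustration of what such metrics could look like, the sketch below compares a 2-D saliency map over a spectrogram with a binary expert mask. The function name and the specific metric choices (precision@k for ranking, IoU at a mean threshold for overlap, Jensen-Shannon divergence for distributional similarity) are assumptions for illustration, not the paper's exact formulations.

```python
import numpy as np


def saliency_agreement(saliency, expert_mask):
    """Illustrative comparison of a saliency map with an expert mask.

    Both inputs are 2-D arrays over the spectrogram (time x frequency);
    `expert_mask` is binary. Metric choices here are assumptions, not
    the paper's exact definitions.
    """
    s = saliency.ravel().astype(float)
    m = expert_mask.ravel() > 0

    # Ranking quality: precision@k, with k = number of annotated pixels.
    # Of the k most salient pixels, how many fall inside the expert mask?
    k = max(int(m.sum()), 1)
    top_k = np.argsort(-s)[:k]
    precision_at_k = float(m[top_k].mean())

    # Spatial overlap: binarize saliency at its mean, then IoU with the mask.
    s_bin = s > s.mean()
    union = np.logical_or(s_bin, m).sum()
    iou = float(np.logical_and(s_bin, m).sum() / union) if union else 0.0

    # Distributional similarity: normalize both maps into probability
    # distributions over pixels, then Jensen-Shannon divergence.
    p = s / s.sum() if s.sum() > 0 else np.full(s.size, 1.0 / s.size)
    q = m / m.sum() if m.sum() > 0 else np.full(m.size, 1.0 / m.size)
    mid = 0.5 * (p + q)

    def kl(a, b):
        nz = a > 0
        return float(np.sum(a[nz] * np.log(a[nz] / b[nz])))

    js = 0.5 * kl(p, mid) + 0.5 * kl(q, mid)
    return precision_at_k, iou, js
```

When saliency and annotation coincide exactly, precision@k and IoU are 1.0 and the JS divergence is 0; disagreement pushes the first two toward 0 and the divergence upward.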
Submission Number: 36