An Expert-Aligned Toolbox for Explainable AI in Animal Communication

Published: 02 Oct 2025 · Last Modified: 02 Oct 2025 · NeurIPS 2025 Poster · CC BY 4.0
Keywords: Explainable AI, XAI, Bioacoustics, Animal communication, Participatory annotation, Saliency maps, Spectrogram, Occlusion, Individual identification, Capuchin monkeys, Joint embeddings, MRMR
TL;DR: We present an expert-aligned toolbox that adapts computer vision explainability methods to bioacoustics, enabling participatory evaluation and scientific discovery in animal communication.
Abstract: Explainable AI (XAI) remains underdeveloped in bioacoustics, despite the field's growing reliance on high-performance black-box models. We evaluate the explainability of state-of-the-art models for capuchin monkey individual identification and introduce new methods to make bioacoustic classifiers more interpretable. Our approach combines participatory evaluation by domain experts through a web-based interface with quantitative metrics that assess alignment between saliency maps and expert annotations; specifically, we report metrics on ranking quality, spatial overlap, and distributional similarity, each computed under complementary feature-importance formulations. To facilitate annotation, we introduce a web interface for pixel-level spectrogram labeling with interactive, mask-exclusive audio playback, which lets experts listen separately to masked foreground or background regions, as well as optional semi-automated segmentation. Together, these tools provide a reproducible framework for benchmarking explainability in bioacoustic models, advancing toward more transparent, collaborative, and biologically meaningful AI for animal communication.
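The three metric families named in the abstract can be sketched with generic stand-ins. The abstract does not give the paper's exact formulations, so the choices below are assumptions: precision@k for ranking quality, IoU of a thresholded saliency map for spatial overlap, and Jensen-Shannon divergence for distributional similarity.

```python
import numpy as np

def alignment_metrics(saliency, mask, threshold=0.5):
    """Compare a saliency map to a binary expert annotation mask (same shape).

    Illustrative stand-ins only; the paper's exact metric definitions are
    not specified in the abstract.
    """
    s = saliency.ravel().astype(float)
    m = mask.ravel().astype(bool)

    # Ranking quality: precision@k, with k = number of expert-annotated pixels.
    k = int(m.sum())
    topk = np.argsort(s)[::-1][:k]
    precision_at_k = float(m[topk].mean()) if k > 0 else 0.0

    # Spatial overlap: IoU between the thresholded saliency map and the mask.
    pred = s >= threshold
    iou = float((pred & m).sum()) / max(int((pred | m).sum()), 1)

    # Distributional similarity: Jensen-Shannon divergence between the
    # saliency map and the mask, each normalized to a probability distribution.
    p = s / s.sum() if s.sum() > 0 else np.full_like(s, 1 / s.size)
    q = m / m.sum() if m.any() else np.full_like(s, 1 / s.size)
    mix = 0.5 * (p + q)

    def kl(a, b):
        nz = a > 0  # skip zero-probability bins (0 * log 0 := 0)
        return float(np.sum(a[nz] * np.log2(a[nz] / b[nz])))

    jsd = 0.5 * kl(p, mix) + 0.5 * kl(q, mix)
    return {"precision_at_k": precision_at_k, "iou": iou, "jsd": jsd}
```

A perfectly aligned saliency map scores precision@k = 1, IoU = 1, and JSD near 0; each metric degrades differently as saliency mass drifts outside the expert mask, which is why reporting all three families is informative.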
Submission Number: 36