Finding, visualizing, and quantifying latent structure across diverse animal vocal repertoires

Jun 13, 2020 · ICML 2020 Workshop SAS Submission
  • Keywords: unsupervised learning, audio, birdsong, animal communication
  • TL;DR: We model latent acoustic structure in diverse vocal communication signals and find patterns common across species.
  • Abstract: Animals produce vocalizations that range in complexity from a single repeated call to hundreds of unique vocal elements patterned in sequences unfolding over hours. Characterizing complex vocalizations can require considerable effort and a deep intuition about each species' vocal behavior. Even with a great deal of experience, human characterizations of animal communication can be affected by human perceptual biases. We present a set of computational methods for projecting animal vocalizations into low dimensional latent representational spaces that are directly learned from the spectrograms of vocal signals. We apply these methods to diverse datasets from over 20 species, including humans, bats, songbirds, mice, cetaceans, and nonhuman primates. Latent projections uncover complex features of data in visually intuitive and quantifiable ways, enabling high-powered comparative analyses of unbiased acoustic features. We introduce methods for analyzing vocalizations as both discrete sequences and as continuous latent variables. Each method can be used to disentangle complex spectro-temporal structure and observe long-timescale organization in communication.
  • Double Submission: Yes
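The core idea in the abstract, projecting vocalizations into a low-dimensional latent space learned from their spectrograms, can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline: it computes log spectrograms of fixed-length clips and projects the flattened features with PCA as a simple stand-in for the nonlinear embeddings typically used for this task; the function name and toy signals are invented for the example.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import PCA

def latent_projection(waveforms, fs=22050, n_components=2):
    """Project equal-length audio clips into a low-dimensional latent space.

    Each clip is converted to a log spectrogram, flattened into a feature
    vector, and projected. PCA stands in here for a nonlinear embedding.
    """
    feats = []
    for w in waveforms:
        _, _, S = spectrogram(w, fs=fs, nperseg=256, noverlap=128)
        feats.append(np.log1p(S).ravel())
    X = np.stack(feats)
    return PCA(n_components=n_components).fit_transform(X)

# Toy data: two synthetic "call types" at different carrier frequencies.
rng = np.random.default_rng(0)
t = np.linspace(0, 0.5, 11025, endpoint=False)
calls = [np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(t.size)
         for f in [1000] * 10 + [4000] * 10]
Z = latent_projection(calls)
print(Z.shape)  # (20, 2)
```

In a projection like this, clips with similar spectro-temporal structure land near each other, so call types form visually separable clusters that can then be quantified or clustered downstream.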