The PAIR-R24M Dataset for Multi-animal 3D Pose Estimation

Jun 08, 2021 (edited Nov 07, 2021), NeurIPS 2021 Datasets and Benchmarks Track (Round 1)
  • Keywords: Animal Behavior, Pose Estimation, Multi-agent, Dataset, Motion Capture
  • TL;DR: A 24 million frame database of video and 3D pose for interacting pairs of laboratory rats.
  • Abstract: Understanding the biological basis of social and collective behaviors in animals is a key goal of the life sciences, and may yield important insights for engineering intelligent multi-agent systems. A critical step in interrogating the mechanisms underlying social behaviors is a precise readout of the 3D pose of interacting animals. While approaches for multi-animal pose estimation are beginning to emerge, they remain challenging to compare due to the lack of standardized training and benchmark datasets. Here we introduce the PAIR-R24M (Paired Acquisition of Interacting oRganisms - Rat) dataset for multi-animal 3D pose estimation, which contains 24.3 million frames of RGB video and 3D ground-truth motion capture of dyadic interactions in laboratory rats. PAIR-R24M contains data from 18 distinct pairs of rats and 24 different viewpoints. We annotated the data with 11 behavioral labels and 3 interaction categories to facilitate benchmarking in rare but challenging behaviors. To establish a baseline for markerless multi-animal 3D pose estimation, we developed a multi-animal extension of DANNCE, a recently published network for 3D pose estimation in freely behaving laboratory animals. As the first large multi-animal 3D pose estimation dataset, PAIR-R24M will help advance 3D animal tracking approaches and aid in elucidating the neural basis of social behaviors.
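The abstract describes per-frame annotations (paired 3D motion capture, 24 viewpoints, 11 behavioral labels, 3 interaction categories). A minimal sketch of how such a record might be represented is below; the keypoint count `K`, the `PairFrame` container, and all field names are illustrative assumptions, not the dataset's actual schema:

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical illustration only: PAIR-R24M's real file format and keypoint
# count are not specified in the abstract. We assume one per-frame record
# holding a 3D pose (K keypoints x 3 coordinates) for each rat in the pair.
K = 20  # assumed number of tracked keypoints per animal

@dataclass
class PairFrame:
    frame_id: int
    view_id: int                 # one of the 24 camera viewpoints
    behavior_label: str          # one of the 11 behavioral labels
    interaction_category: str    # one of the 3 interaction categories
    poses: np.ndarray            # shape (2, K, 3): 3D mocap for both rats

def min_inter_animal_distance(frame: PairFrame) -> float:
    """Smallest keypoint-to-keypoint distance between the two animals,
    a simple proxy for how closely the pair is interacting."""
    a, b = frame.poses[0], frame.poses[1]          # (K, 3) each
    diffs = a[:, None, :] - b[None, :, :]          # (K, K, 3) via broadcasting
    return float(np.linalg.norm(diffs, axis=-1).min())

# Example with synthetic data (labels here are made up for demonstration)
rng = np.random.default_rng(0)
frame = PairFrame(0, 3, "rearing", "close", rng.normal(size=(2, K, 3)))
print(min_inter_animal_distance(frame))
```

A pairwise-distance summary like this is one simple way a benchmark user might stratify frames by interaction closeness when evaluating multi-animal pose estimators.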
  • Supplementary Material: zip