Continually Adapt or Not (CAN)? A Continual Learning Benchmark of Camera Trap Species Classification over Time

Published: 09 Oct 2025, Last Modified: 09 Oct 2025
Venue: NeurIPS 2025 Workshop Imageomics
License: CC BY 4.0
Submission Track: Short papers presenting ongoing research or work submitted to other venues (up to 5 pages, excluding references)
Keywords: camera trap, species classification, benchmark, dataset, continual learning, domain adaptation
TL;DR: The paper presents CAN, a benchmark of 546 camera traps framing wildlife species recognition as an online continual learning task to assess when and how adaptation improves performance.
Abstract: Camera traps offer an effective, non-invasive approach to wildlife monitoring. However, substantial variations in image style across camera setups, combined with temporal shifts in image content, pose significant challenges to developing accurate and robust image recognition models. In this paper, we present a novel benchmark for these challenges, leveraging data from 546 camera traps across 17 LILA BC datasets. We introduce a systematic data preparation pipeline inspired by the FAIR principles and formulate the task as an instance of online continual learning to better reflect the practical usage of camera traps. This approach sharply contrasts with prior studies that typically disregard the chronological structure of the data. Our study reveals several critical insights. First, using the latest vision foundation model for biological domains, BioCLIP 2, we observe a long-tailed accuracy distribution across the 546 camera-trap datasets, highlighting the persistent need for model adaptation. Second, continual adaptation is generally necessary to address temporal shifts, but the required adaptation frequency may decrease over time. Third, we identify several unresolved machine learning challenges from a practical standpoint and suggest directions for future research.
Submission Number: 54
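
The online continual learning formulation described in the abstract implies an evaluate-then-adapt protocol over each camera trap's chronologically ordered image stream. The sketch below is a minimal illustration of that protocol, not the authors' released code: the `Batch` container, the duck-typed `model.predict` / `model.adapt` methods, and the `adapt_every` parameter are hypothetical names introduced here for illustration.

```python
# Illustrative evaluate-then-adapt loop for one camera trap's image stream.
# All names here (Batch, model.predict, model.adapt, adapt_every) are
# hypothetical placeholders, not APIs from the paper.

from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class Batch:
    images: list        # raw images or pre-extracted features
    labels: List[int]   # ground-truth species labels
    timestamp: str      # capture time; the stream is assumed time-ordered


def online_eval_then_adapt(model, stream: Iterable[Batch], adapt_every: int = 1) -> float:
    """Evaluate each batch before the model is allowed to learn from it,
    so accuracy is always measured on data the model has not yet seen."""
    correct, total = 0, 0
    for step, batch in enumerate(stream, start=1):
        preds = model.predict(batch.images)        # 1) predict on the incoming batch
        correct += sum(int(p == y) for p, y in zip(preds, batch.labels))
        total += len(batch.labels)
        if step % adapt_every == 0:                # 2) adapt only after evaluation
            model.adapt(batch.images, batch.labels)
    return correct / max(total, 1)                 # online accuracy over the stream
```

Sweeping `adapt_every`, or skipping `model.adapt` entirely, is one simple way to probe the question the benchmark poses: when adaptation improves performance and whether the required adaptation frequency can decrease over time.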