Continual-Mega: A Large-scale Benchmark for Generalizable Continual Anomaly Detection

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Anomaly Detection, Continual Learning, Benchmark
TL;DR: We introduce a large-scale benchmark and few-shot continual learning scenario for industrial anomaly detection, enabling robust evaluation of both adaptation and zero-shot generalization across diverse and unseen classes.
Abstract: In this paper, we introduce a new benchmark for continual learning in anomaly detection, aimed at better reflecting real-world deployment scenarios. Our benchmark, Continual-MEGA, includes a large and diverse dataset that significantly expands existing evaluation settings by combining carefully curated existing datasets with our newly proposed dataset, ContinualAD. Beyond standard continual learning settings that increase the number of classes, we additionally propose a scenario that evaluates zero-shot generalization to unseen classes (those not encountered during continual adaptation), reflecting recent advances in continual zero-shot research and highlighting its practical significance. This setting introduces a new agenda for the anomaly detection field, and we conduct extensive evaluations of existing anomaly detection algorithms designed for continual or zero-shot scenarios, as well as our proposed baseline methods. From our experiments, we derive three key findings: (1) existing methods exhibit significant limitations, particularly in pixel-level defect localization; (2) the proposed ContinualAD dataset is effective for the proposed benchmarking scenario; and (3) our baseline method suggests a promising direction for designing CLIP-based continual and generalizable frameworks through simple adaptation combined with feature synthesis.
Primary Area: datasets and benchmarks
Submission Number: 11229