Abstract: Large vision-language models (LVLMs) are markedly proficient in deriving visual representations guided by natural language.
Recent explorations have utilized LVLMs to tackle zero-shot visual anomaly detection (VAD) challenges by pairing images with textual descriptions indicative of normal and abnormal conditions, referred to as anomaly prompts. However, existing approaches depend on static anomaly prompts that are prone to cross-semantic ambiguity, and prioritize global image-level representations over the local pixel-level image-to-text alignment necessary for accurate anomaly localization. In this paper, we present ALFA, a training-free approach designed to address these challenges via a unified model. We propose a run-time prompt adaptation strategy that first leverages a large language model (LLM) to generate informative anomaly prompts. This strategy is enhanced by a contextual scoring mechanism for per-image anomaly prompt adaptation and cross-semantic ambiguity mitigation. We further introduce a novel fine-grained aligner to fuse local pixel-level semantics for precise anomaly localization, by projecting the image-text alignment from global to local semantic spaces. Extensive evaluations on the challenging MVTec AD and VisA datasets confirm ALFA's effectiveness in harnessing the language potential for zero-shot VAD, achieving significant PRO improvements of 12.1% on MVTec AD and 8.9% on VisA compared to state-of-the-art zero-shot VAD approaches.
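To make the contextual scoring idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation), assuming a CLIP-like LVLM has already produced L2-normalized embeddings for the test image and for a pool of LLM-generated normal/abnormal prompts; all function names, the keep ratio, and the temperature are illustrative assumptions.

```python
# Minimal sketch (illustrative, not ALFA's exact mechanism): per-image anomaly
# prompt adaptation via a contextual score over LLM-generated prompts.
import torch
import torch.nn.functional as F


def adapt_prompts(image_emb: torch.Tensor,
                  prompt_embs: torch.Tensor,
                  keep_ratio: float = 0.5) -> torch.Tensor:
    """Keep only the prompts most relevant to this image (contextual scoring)."""
    scores = prompt_embs @ image_emb               # (P,) cosine similarities
    k = max(1, int(keep_ratio * prompt_embs.shape[0]))
    top = scores.topk(k).indices                   # indices of retained prompts
    return prompt_embs[top]                        # adapted per-image prompt set


def image_anomaly_score(image_emb: torch.Tensor,
                        normal_embs: torch.Tensor,
                        abnormal_embs: torch.Tensor,
                        temperature: float = 0.07) -> torch.Tensor:
    """Image-level anomaly score: softmax mass assigned to the abnormal prompts."""
    sims = torch.cat([normal_embs, abnormal_embs]) @ image_emb / temperature
    probs = sims.softmax(dim=0)
    return probs[normal_embs.shape[0]:].sum()      # probability of "abnormal"


# Toy usage with random embeddings standing in for real CLIP-like features.
d = 512
img = F.normalize(torch.randn(d), dim=0)
normal_pool = F.normalize(torch.randn(20, d), dim=-1)    # LLM-generated normal prompts
abnormal_pool = F.normalize(torch.randn(20, d), dim=-1)  # LLM-generated anomaly prompts
score = image_anomaly_score(img,
                            adapt_prompts(img, normal_pool),
                            adapt_prompts(img, abnormal_pool))
print(f"anomaly score: {score.item():.3f}")
```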
Primary Subject Area: [Content] Vision and Language
Relevance To Conference: In this paper, we present a novel contribution to multimedia processing through the development of a training-free zero-shot visual anomaly detection (VAD) model, focusing on vision-language synergy. Our work addresses the critical challenge of cross-semantic ambiguity within VAD, introducing ALFA, an adaptive LLM-empowered LVLM that effectively mitigates this issue without requiring additional data or fine-tuning. Key highlights of our approach include:
We identify a previously unaddressed issue of cross-semantic ambiguity. In response, we present ALFA, an adaptive LLM-empowered model for zero-shot VAD, effectively resolving this challenge without the need for extra data or fine-tuning.
We propose a run-time prompt adaptation strategy that leverages an LLM to generate informative anomaly prompts and dynamically adapts them on a per-image basis.
We develop a fine-grained aligner that learns a global-to-local semantic space projection and then generalizes this projection to support precise pixel-level anomaly localization (a minimal sketch follows this list).
Our comprehensive experiments validate ALFA's capacity for zero-shot VAD across diverse datasets. Moreover, ALFA can be readily extended to the few-shot setting, achieving state-of-the-art results that are on par with, or even surpass, those of full-shot and fine-tuning-based methods.
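For illustration only, the sketch below shows one way the global image-text alignment can be reused at the local level: each patch embedding (assumed to be projected into the joint image-text space) is scored against the adapted normal/abnormal prompt embeddings, and the per-patch abnormal probability is upsampled into a pixel-level anomaly map. All names, shapes, and the interpolation choice are assumptions, not the authors' exact aligner.

```python
# Minimal sketch (illustrative, not ALFA's exact aligner): local pixel-level
# alignment by scoring every patch token against the adapted prompt embeddings.
import torch
import torch.nn.functional as F


def anomaly_map(patch_embs: torch.Tensor,      # (H*W, d) patch tokens
                normal_embs: torch.Tensor,     # (Pn, d) adapted normal prompts
                abnormal_embs: torch.Tensor,   # (Pa, d) adapted abnormal prompts
                grid: int,                     # side length of the patch grid
                out_size: int = 224,
                temperature: float = 0.07) -> torch.Tensor:
    patch_embs = F.normalize(patch_embs, dim=-1)
    text_embs = F.normalize(torch.cat([normal_embs, abnormal_embs]), dim=-1)
    sims = patch_embs @ text_embs.t() / temperature          # (H*W, Pn+Pa)
    probs = sims.softmax(dim=-1)
    score = probs[:, normal_embs.shape[0]:].sum(dim=-1)      # abnormal mass per patch
    score = score.view(1, 1, grid, grid)
    # Upsample the patch-level scores to a pixel-level anomaly map.
    return F.interpolate(score, size=out_size, mode="bilinear",
                         align_corners=False)[0, 0]           # (out_size, out_size)


# Toy usage: a 14x14 patch grid with random features standing in for the LVLM.
d, grid = 512, 14
patches = torch.randn(grid * grid, d)
amap = anomaly_map(patches, torch.randn(5, d), torch.randn(5, d), grid)
print(amap.shape)  # torch.Size([224, 224])
```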
Supplementary Material: zip
Submission Number: 2959