Prompting Scientific Names for Zero-Shot Species Recognition

Published: 07 Oct 2023, Last Modified: 01 Dec 2023
Venue: EMNLP 2023 Main
Submission Type: Regular Short Paper
Submission Track: Language Grounding to Vision, Robotics and Beyond
Submission Track 2: Information Retrieval and Text Mining
Keywords: vision-language model, fine-grained recognition, zero-shot recognition, prompt engineering, species recognition
TL;DR: Using the vision-language model CLIP, we address zero-shot recognition of fine-grained species and propose a simple method: translating species' scientific names into the common English names used in prompts, which achieves 2~5 times higher accuracy.
Abstract: Trained on web-scale image-text pairs, Vision-Language Models (VLMs) such as CLIP can recognize images of common objects in a zero-shot fashion. However, it is underexplored how to use CLIP for zero-shot recognition of highly specialized concepts, e.g., species of birds, plants, and animals, whose scientific names are written in Latin or Greek. Indeed, CLIP performs poorly for zero-shot species recognition with prompts that use scientific names, e.g., "a photo of Lepus Timidus" (a scientific name in Latin), because these names are usually not included in CLIP's training set. To improve performance, we explore using large language models (LLMs) to generate descriptions (e.g., of species' color and shape) and additionally use them in prompts. However, this improves accuracy only marginally. Instead, we are motivated to translate scientific names (e.g., Lepus Timidus) to common English names (e.g., mountain hare) and use the latter in the prompts. We find that common names are more likely to be included in CLIP's training set, and prompting with them achieves 2~5 times higher accuracy on benchmark datasets of fine-grained species recognition.
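To make the prompting comparison concrete, below is a minimal sketch of zero-shot CLIP classification with scientific-name prompts versus common-name prompts, using the Hugging Face transformers CLIP API. This is not the authors' implementation: the model checkpoint, the image path hare.jpg, and the hard-coded name pairs are illustrative assumptions.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Publicly available CLIP checkpoint (an assumption; the paper may use a different one).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Two ways of naming the same candidate classes:
# scientific (Latin) names vs. their common English translations.
scientific_prompts = ["a photo of Lepus timidus", "a photo of Lepus europaeus"]
common_prompts = ["a photo of a mountain hare", "a photo of a European hare"]

image = Image.open("hare.jpg")  # placeholder test image path

def zero_shot_probs(prompts, image):
    """Score one image against a list of text prompts with CLIP."""
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image has shape (num_images, num_prompts);
    # softmax turns the similarity scores into class probabilities.
    return outputs.logits_per_image.softmax(dim=-1)

print("scientific-name prompts:", zero_shot_probs(scientific_prompts, image))
print("common-name prompts:   ", zero_shot_probs(common_prompts, image))
```

Per the abstract's finding, the common-name prompts should yield markedly more confident and more accurate predictions, since phrases like "mountain hare" are far more frequent than Latin binomials in CLIP's web-scraped training captions.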
Submission Number: 336