Keywords: fine-grained, classification, VLM, evaluation
Abstract: Recent vision-language models (VLMs) have made significant progress on a wide range of visual reasoning benchmarks spanning academic tasks, document understanding, and general visual question answering. These improvements hold across VLMs built on a variety of base models, alignment architectures, and training data. However, recent work shows that these models lag behind on traditional image classification benchmarks, which test fine-grained visual knowledge. We evaluate a large number of recent VLMs on fine-grained classification benchmarks and identify factors that may explain the disconnect between fine-grained knowledge and other vision benchmarks. Through a series of ablation experiments, we find that using a better LLM improves all benchmark scores equally, while a better vision encoder disproportionately improves fine-grained classification performance. Furthermore, we find that better pretraining data is also vital to fine-grained performance, particularly when the language model weights are unfrozen during pretraining.
Primary Area: foundation or frontier models, including LLMs
Submission Number: 23693