Keywords: vision-language model, visual reasoning, zero-shot, VLM scaling
TL;DR: We implement 50+ vision-language model benchmarks in a unified codebase, revealing the limits of scaling for reasoning, relational understanding, and even basic counting and digit-recognition tasks.
Abstract: Significant research efforts have been made to scale and improve vision-language model (VLM) training approaches. Yet, with an ever-growing number of benchmarks, researchers are tasked with the heavy burden of implementing each protocol, bearing a non-trivial computational cost, and making sense of how all these benchmarks translate into meaningful axes of progress. To facilitate a systematic evaluation of VLM progress, we introduce UniBench: a unified implementation of 50+ VLM benchmarks spanning a range of carefully categorized vision-centric capabilities, from object recognition to spatial awareness, counting, and much more. We showcase the utility of UniBench for measuring progress by evaluating nearly 60 publicly available vision-language models, trained on scales of up to 12.8B samples. We find that while scaling training data or model size can boost many vision-language model capabilities, scaling offers little benefit for reasoning or relations. Surprisingly, we also discover that today's best VLMs struggle on simple digit recognition and counting tasks, e.g., MNIST, which much simpler networks can solve. Where scale falls short, we find that more precise interventions, such as data quality or tailored learning objectives, offer more promise. For practitioners, we also offer guidance on selecting a suitable VLM for a given application. Finally, we release an easy-to-run UniBench codebase with the full set of 50+ benchmarks and comparisons across 59 models, as well as a distilled, representative set of benchmarks that runs in 5 minutes on a single GPU. UniBench, together with model evaluations on all benchmarks, is provided as a toolbox at: https://github.com/facebookresearch/unibench
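For illustration, a minimal sketch of how such a unified evaluation loop might be driven from the released toolbox follows. The class name `Evaluator`, its arguments, and the model identifier are assumptions made for the example, not the confirmed UniBench API; consult the repository README for the supported entry points.

```python
# Hypothetical usage sketch -- names and arguments below are illustrative
# assumptions, not the confirmed UniBench API; see
# https://github.com/facebookresearch/unibench for the actual entry points.
from unibench import Evaluator  # assumed import path

# Restrict evaluation to the distilled, representative benchmark subset
# described in the abstract (argument name is an assumption).
evaluator = Evaluator(benchmarks="representative")

# Register one of the publicly available VLMs (identifier is illustrative).
evaluator.add_model("openai/clip-vit-base-patch32")

# Run the selected benchmarks and collect per-benchmark scores.
results = evaluator.evaluate()

for benchmark_name, score in results.items():
    print(f"{benchmark_name}: {score:.3f}")
```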
Supplementary Material: pdf
Submission Number: 2130