Primary Area: datasets and benchmarks
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Vision-language Pre-training, Multimodality, Benchmark, Dataset
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: A new benchmark for evaluating large vision-language models
Abstract: Large vision-language models (VLMs) have recently achieved remarkable progress, exhibiting impressive perception and reasoning abilities on visual information.
However, effectively evaluating these large vision-language models remains a major obstacle, hindering future development in this domain.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but lack fine-grained ability assessment and rely on non-robust evaluation metrics.
Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities through human assessment, but they are not scalable and exhibit significant bias.
In response to these challenges, we propose MMBench, a new benchmark for assessing multi-modal capabilities of VLMs.
MMBench methodically develops a comprehensive evaluation pipeline with two key features: (1) MMBench is a meticulously curated dataset that surpasses existing similar benchmarks in the number and variety of evaluation questions and abilities; (2) MMBench introduces a rigorous CircularEval strategy and uses ChatGPT to convert free-form predictions into pre-defined choices, enabling a fair and robust evaluation regardless of a VLM's instruction-following capability.
Together, these components make MMBench a systematically designed, objective benchmark for robustly evaluating the diverse abilities of vision-language models.
We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain.
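To make the CircularEval strategy mentioned in the abstract concrete: a model is credited for a multiple-choice question only if its answer remains correct under every circular shift of the choice order. The Python sketch below is a minimal illustration of that idea as described above; the `ask_model` callable is a hypothetical stand-in for an actual VLM inference call (after the ChatGPT-based mapping of free-form output to a choice index), not part of any released code.

```python
from typing import Callable, List

def circular_eval(question: str,
                  choices: List[str],
                  correct_idx: int,
                  ask_model: Callable[[str, List[str]], int]) -> bool:
    """CircularEval sketch: credit the model only if it picks the
    ground-truth answer under every circular shift of the choices.
    `ask_model` is a hypothetical hook that returns the index of the
    choice the VLM selects for the given (question, choices) pair."""
    n = len(choices)
    for shift in range(n):
        # Rotate the choice list by `shift` positions.
        rotated = choices[shift:] + choices[:shift]
        # Index of the ground-truth answer after rotation.
        target = (correct_idx - shift) % n
        if ask_model(question, rotated) != target:
            return False  # A single failed pass fails the question.
    return True
```

Under this scheme, a model that always picks the same option position can pass at most one of the N rotations and therefore fails the question, so positional bias no longer inflates accuracy.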
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 502