WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models

06 Jun 2022, 00:52 (modified: 11 Oct 2022, 11:51) — NeurIPS 2022 Datasets and Benchmarks
Keywords: vision-and-language, dynamic-benchmark, visual-associations, visual-common-sense-reasoning
TL;DR: We introduce WinoGAViL: an online game to collect vision-and-language associations, used as a dynamic benchmark to evaluate state-of-the-art models.
Abstract: While vision-and-language models perform well on tasks such as visual question answering, they struggle with basic human commonsense reasoning skills. In this work, we introduce WinoGAViL: an online game of vision-and-language associations (e.g., between werewolves and a full moon), used as a dynamic evaluation benchmark. Inspired by the popular card game Codenames, a spymaster gives a textual cue related to several visual candidates, and another player tries to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players. We use the game to collect 3.5K instances, finding that they are intuitive for humans (>90% Jaccard index) but challenging for state-of-the-art AI models: the best model (ViLT) achieves a score of 52%, succeeding mostly where the cue is visually salient. Our analysis, as well as the feedback we collect from players, indicates that the collected associations require diverse reasoning skills, including general knowledge, common sense, abstraction, and more. We release the dataset, the code, and the interactive game, allowing future data collection that can be used to develop models with better association abilities.
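The abstract scores both humans and models by the Jaccard index between the images a solver selects and the gold associations for a cue. A minimal sketch of that metric (illustrative names only; this is not the authors' evaluation code):

```python
def jaccard_index(predicted, gold):
    """|intersection| / |union| of two sets of selected candidates."""
    predicted, gold = set(predicted), set(gold)
    if not predicted and not gold:
        return 1.0  # both empty: treat as perfect agreement
    return len(predicted & gold) / len(predicted | gold)

# Hypothetical example: for a cue like "full moon", compare a solver's
# chosen candidates against the gold associations.
gold = {"werewolf", "night_sky"}
pred = {"werewolf", "lamp"}
print(jaccard_index(pred, gold))  # 1 in the intersection, 3 in the union
```

Under this metric, a human score above 90% means players recover nearly the same candidate set the spymaster intended, while the best model overlaps with the gold set roughly half the time.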
Supplementary Material: pdf
Dataset Url:
License: Code is licensed under the MIT License; the dataset is licensed under the CC-BY 4.0 License
Author Statement: Yes
Contribution Process Agreement: Yes
In Person Attendance: Yes