Abstract: We introduce the task of Image-Set Visual Question Answering (ISVQA), which generalizes the commonly studied single-image VQA problem to multi-image settings. Taking a natural language question and a set of images as input, the task is to answer the question based on the content of the images. Questions can be about objects and relationships in one or more images, or about the entire scene depicted by the image set. To enable research on this new topic, we introduce two ISVQA datasets, covering indoor and outdoor scenes. They simulate the real-world scenarios of indoor image collections and multiple car-mounted cameras, respectively. The indoor-scene dataset contains 91,479 human-annotated questions for 48,138 image sets, and the outdoor-scene dataset has 49,617 questions for 12,746 image sets. We analyze the properties of the two datasets, including question and answer distributions, question types, dataset biases, and question-image dependencies. We also build new baseline models to investigate the research challenges posed by ISVQA.
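To make the task formulation concrete, the following is a minimal Python sketch of what a single ISVQA instance could look like. The class and field names (ISVQAInstance, image_paths, question, answers) are illustrative assumptions for exposition only, not the released datasets' actual annotation schema.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ISVQAInstance:
        """One ISVQA example: a question grounded in a set of images.

        Field names are hypothetical; they do not reflect the
        datasets' real annotation format.
        """
        image_paths: List[str]  # the image set (e.g., several camera views of one scene)
        question: str           # natural language question about the image set
        answers: List[str]      # human-provided answer annotations

    # An ISVQA model maps (image set, question) -> answer, reasoning
    # jointly over all images rather than over a single image.
    example = ISVQAInstance(
        image_paths=["cam_front.jpg", "cam_left.jpg", "cam_right.jpg"],
        question="How many cars are visible across all views?",
        answers=["three"],
    )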