Interpretable Counting for Visual Question Answering

15 Feb 2018 (modified: 04 Jun 2023) · ICLR 2018 Conference Blind Submission
Keywords: Counting, VQA, Object detection
TL;DR: We perform counting for visual question answering; our model produces interpretable outputs by counting directly from detected objects.
Abstract: Questions that require counting a variety of objects in images remain a major challenge in visual question answering (VQA). The most common approaches to VQA involve either classifying answers based on fixed-length representations of both the image and question, or summing fractional counts estimated from each section of the image. In contrast, we treat counting as a sequential decision process and force our model to make discrete choices about what to count. Specifically, the model sequentially selects from detected objects and learns interactions between objects that influence subsequent selections. A distinction of our approach is its intuitive and interpretable output, as discrete counts are automatically grounded in the image. Furthermore, our method outperforms the state-of-the-art architecture for VQA on multiple metrics that evaluate counting.
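The abstract describes counting as a loop of discrete selections over detected objects, where each selection updates the scores of the remaining objects before the next choice. A minimal toy sketch of that selection loop is below; it is not the paper's learned model (which trains the scoring and interaction terms), and the names `scores`, `interactions`, and `stop_threshold` are hypothetical placeholders for quantities the model would produce.

```python
def sequential_count(scores, interactions, stop_threshold=0.5):
    """Toy sketch: counting as a sequence of discrete object selections.

    scores: per-object relevance scores (n floats), e.g. detector scores
        fused with the question representation (hypothetical inputs).
    interactions: n x n matrix; interactions[i][j] is added to object j's
        score after object i is selected, so e.g. a negative entry can
        suppress an overlapping duplicate detection.
    The loop stops when no unselected object's score reaches
    stop_threshold, yielding a discrete count grounded in the specific
    detections that were selected.
    """
    scores = list(scores)
    selected = []
    while True:
        # Pick the highest-scoring object not yet selected.
        best, best_score = None, stop_threshold
        for j, s in enumerate(scores):
            if j not in selected and s >= best_score:
                best, best_score = j, s
        if best is None:
            break  # discrete "stop" decision ends the count
        selected.append(best)
        # Selected object influences the scores of the remaining objects.
        scores = [s + interactions[best][j] for j, s in enumerate(scores)]
    return len(selected), selected
```

For example, two boxes covering the same object can be given a strong negative interaction, so selecting one removes the other from consideration; with no interactions, every object above the threshold is counted once.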
Data: [HowMany-QA](https://paperswithcode.com/dataset/howmany-qa), [COCO](https://paperswithcode.com/dataset/coco), [Visual Genome](https://paperswithcode.com/dataset/visual-genome), [Visual Question Answering](https://paperswithcode.com/dataset/visual-question-answering), [Visual Question Answering v2.0](https://paperswithcode.com/dataset/visual-question-answering-v2-0)
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:1712.08697/code)
