Abstract: Questions that require counting a variety of objects in images remain a major challenge in visual question answering (VQA). The most common approaches to VQA involve either classifying answers based on fixed-length representations of both the image and question or summing fractional counts estimated from each section of the image. In contrast, we treat counting as a sequential decision process and force our model to make discrete choices of what to count. Specifically, the model sequentially selects from detected objects and learns interactions between objects that influence subsequent selections. A distinction of our approach is its intuitive and interpretable output, as discrete counts are automatically grounded in the image. Furthermore, our method outperforms the state-of-the-art architecture for VQA on multiple metrics that evaluate counting.
TL;DR: We perform counting for visual question answering; our model produces interpretable outputs by counting directly from detected objects.
Keywords: Counting, VQA, Object detection
Data: [HowMany-QA](https://paperswithcode.com/dataset/howmany-qa), [MS COCO](https://paperswithcode.com/dataset/coco), [Visual Genome](https://paperswithcode.com/dataset/visual-genome), [Visual Question Answering](https://paperswithcode.com/dataset/visual-question-answering), [Visual Question Answering v2.0](https://paperswithcode.com/dataset/visual-question-answering-v2-0)
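The sequential decision process described in the abstract, where the model makes discrete selections from detected objects and learned interactions influence subsequent choices, can be illustrated with a small sketch. The following Python snippet is a hypothetical rendering, not the authors' implementation: the scoring matrices `W_score` and `W_inter`, the greedy argmax policy, and the linear stop rule are all illustrative assumptions, whereas the paper's model learns the interactions and the termination decision end to end.

```python
import numpy as np

def sequential_count(obj_feats, q_feat, W_score, W_inter, stop_bias, max_steps=10):
    """Greedy sketch of counting as a sequential decision process.

    Each step scores every not-yet-selected detected object by its
    affinity with the question, adjusted by interaction terms with the
    objects already counted; when stopping scores best, the episode
    ends and the count is the number of discrete selections made.
    """
    n = obj_feats.shape[0]
    selected = []
    # base relevance of each detected object to the question
    base = obj_feats @ W_score @ q_feat                # shape (n,)
    for _ in range(max_steps):
        scores = base.copy()
        for j in selected:
            # counted objects modulate the remaining candidates,
            # e.g. suppressing near-duplicate detections of one instance
            scores += obj_feats @ W_inter @ obj_feats[j]
            scores[j] = -np.inf                        # count each object once
        stop_score = stop_bias - len(selected)         # hypothetical stop rule
        if len(selected) == n or scores.max() <= stop_score:
            break
        selected.append(int(scores.argmax()))
    # the count is grounded: `selected` names the boxes that were counted
    return len(selected), selected

rng = np.random.default_rng(0)
obj_feats = rng.normal(size=(6, 8))   # 6 detected objects, 8-dim features
q_feat = rng.normal(size=8)           # encoded question
count, chosen = sequential_count(
    obj_feats, q_feat,
    W_score=rng.normal(size=(8, 8)),
    W_inter=rng.normal(size=(8, 8)),
    stop_bias=1.0,
)
print(count, chosen)
```

Because the output is a set of selected detections rather than a scalar regression or a sum of fractional counts, each increment of the count points at a specific box, which is what makes the result interpretable.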
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/interpretable-counting-for-visual-question/code)