REVIVE: Regional Visual Representation Matters in Knowledge-Based Visual Question Answering

Published: 31 Oct 2022, Last Modified: 20 Apr 2023. NeurIPS 2022 Accept.
Keywords: Knowledge-based VQA
TL;DR: We revisit visual representation in knowledge-based VQA and propose a new method, REVIVE, which achieves new state-of-the-art performance on the OK-VQA dataset.
Abstract: This paper revisits visual representation in knowledge-based visual question answering (VQA) and demonstrates that using regional information in a better way can significantly improve performance. While visual representation is extensively studied in traditional VQA, it is under-explored in knowledge-based VQA even though the two tasks share a common spirit, i.e., both rely on visual input to answer the question. Specifically, we observe that in most state-of-the-art knowledge-based VQA methods: 1) visual features are extracted either from the whole image or in a sliding-window manner for retrieving knowledge, and the important relationships within/among object regions are neglected; 2) visual features are not well utilized in the final answering model, which is counter-intuitive to some extent. Based on these observations, we propose a new knowledge-based VQA method, REVIVE, which utilizes the explicit information of object regions not only in the knowledge retrieval stage but also in the answering model. The key motivation is that object regions and their inherent relationships are important for knowledge-based VQA. We perform extensive experiments on the standard OK-VQA dataset and achieve new state-of-the-art performance, i.e., 58.0% accuracy, surpassing the previous state-of-the-art method by a large margin (+3.6%). We also conduct detailed analysis and show the necessity of regional information in different framework components for knowledge-based VQA. Code is publicly available at https://github.com/yzleroy/REVIVE.
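To make the two-stage idea in the abstract concrete, below is a minimal, hypothetical sketch of how regional features could flow into both the knowledge retrieval stage and the answering model. It is not the paper's implementation (see the linked repository for that); the class name `RegionalPipeline`, all dimensions, and the stand-in encoders are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RegionalPipeline(nn.Module):
    """Toy two-stage pipeline: detector region features feed BOTH knowledge
    retrieval and answering, mirroring the paper's key motivation.
    All module choices here are placeholders, not the paper's architecture."""

    def __init__(self, region_dim=2048, hidden=512, vocab=32000, kb_size=1000):
        super().__init__()
        self.region_proj = nn.Linear(region_dim, hidden)   # project detector region features
        self.question_enc = nn.Embedding(vocab, hidden)    # stand-in for a text encoder
        self.knowledge_emb = nn.Embedding(kb_size, hidden) # stand-in for an indexed knowledge base
        self.answer_head = nn.Linear(hidden, vocab)        # stand-in for an answer decoder

    def retrieve(self, region_feats, question_ids, top_k=5):
        # Stage 1: build a region-aware query (rather than a whole-image one)
        # and score it against the knowledge index.
        q = self.question_enc(question_ids).mean(dim=1)
        r = self.region_proj(region_feats).mean(dim=1)
        scores = (q + r) @ self.knowledge_emb.weight.T
        return scores.topk(top_k, dim=-1).indices

    def answer(self, region_feats, question_ids, knowledge_ids):
        # Stage 2: fuse regions, question, and retrieved knowledge,
        # so visual features are also used in the answering model.
        q = self.question_enc(question_ids).mean(dim=1)
        r = self.region_proj(region_feats).mean(dim=1)
        k = self.knowledge_emb(knowledge_ids).mean(dim=1)
        return self.answer_head(q + r + k)

model = RegionalPipeline()
regions = torch.randn(1, 36, 2048)            # e.g., 36 detected object regions
question = torch.randint(0, 32000, (1, 12))   # a tokenized question
knowledge = model.retrieve(regions, question) # region-aware retrieval
logits = model.answer(regions, question, knowledge)
print(logits.shape)  # torch.Size([1, 32000])
```

The point of the sketch is only the data flow: the same region representations enter both `retrieve` and `answer`, in contrast to the methods the abstract critiques, which use whole-image features for retrieval and largely drop visual features at answering time.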
Supplementary Material: pdf