Abstract: Visual question answering (VQA) is a challenging task that requires a deep understanding of both language and images. Most current VQA algorithms focus on capturing the correlation between basic question embeddings and image features through an element-wise product or bilinear pooling of the two vectors; some also use attention models to extract features. In this paper, we enable a deeper analysis of these attention features by capturing their importance through a weighting of their contextual information. We propose a novel interpretable VQA system that leverages weighted attention contextual features (WACF). This multimodal system assigns adaptive weights, based on importance, both to the question and image features themselves and to their contextual features. Our model yields state-of-the-art results on the MS COCO VQA datasets for open-ended question tasks.
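The abstract does not give implementation details, but the mechanism it describes, attending over image regions with the question, then adaptively weighting the attended feature against its contextual features before an element-wise-product fusion, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the module name `WeightedAttentionFusion`, the mean-pooled context summary, and the two-way gating layer are all assumptions introduced here for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedAttentionFusion(nn.Module):
    """Illustrative sketch (not the paper's WACF implementation):
    attend over image regions conditioned on the question, then learn
    adaptive importance weights that mix the attended feature with a
    contextual summary before element-wise-product fusion."""

    def __init__(self, dim):
        super().__init__()
        self.attn = nn.Linear(dim * 2, 1)  # scores each region given the question
        self.gate = nn.Linear(dim * 2, 2)  # adaptive weights: [attended, context]

    def forward(self, q, v):
        # q: (B, D) question embedding; v: (B, R, D) image region features
        B, R, D = v.shape
        q_exp = q.unsqueeze(1).expand(-1, R, -1)           # (B, R, D)
        scores = self.attn(torch.cat([v, q_exp], dim=-1))  # (B, R, 1)
        alpha = F.softmax(scores, dim=1)                   # attention over regions
        attended = (alpha * v).sum(dim=1)                  # (B, D) focal feature
        context = v.mean(dim=1)                            # (B, D) crude contextual summary (assumption)
        w = F.softmax(self.gate(torch.cat([attended, context], dim=-1)), dim=-1)
        weighted = w[:, :1] * attended + w[:, 1:] * context  # importance-weighted mix
        return weighted * q                                # element-wise product fusion

# Usage with dummy tensors:
fusion = WeightedAttentionFusion(dim=512)
q = torch.randn(8, 512)       # batch of question embeddings
v = torch.randn(8, 36, 512)   # 36 region features per image
fused = fusion(q, v)          # (8, 512), ready for an answer classifier
```

The design choice worth noting is the learned gate: rather than treating attended and contextual features as equally informative, a softmax over two scalars lets the model decide per example how much context matters, which is the adaptive-weighting idea the abstract emphasizes.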