How Transferable are Reasoning Patterns in VQA?

09 Dec 2020 (modified: 08 Apr 2021) · OpenReview Anonymous Preprint Blind Submission
Keywords: Visual Question Answering, Reasoning
TL;DR: We present an in-depth analysis of the reasoning patterns at work in Transformer-based VQA models and propose to transfer these patterns from a visual oracle to a model with noisy visual inputs. An interactive tool is provided here: https://reasoningpatterns.github.io
Abstract: Since its inception, Visual Question Answering (VQA) has been notorious as a task where models are prone to exploiting biases in datasets to find shortcuts instead of performing high-level reasoning. Classical methods address this by removing biases from training data, or by adding branches to models that detect and remove biases. In this paper, we argue that uncertainty in vision is a dominant factor preventing the successful learning of reasoning in vision-and-language problems. We train a visual oracle and, in a large-scale study, provide experimental evidence that it is much less prone to exploiting spurious dataset biases than standard models. We propose to study the attention mechanisms at work in the visual oracle and compare them with those of a SOTA Transformer-based model. We provide an in-depth analysis and visualizations of the resulting reasoning patterns using an online visualization tool, which we make publicly available (https://reasoningpatterns.github.io). We exploit these insights by transferring reasoning patterns from the oracle to a SOTA Transformer-based VQA model that takes standard noisy visual inputs, via fine-tuning. In experiments we report higher overall accuracy, as well as higher accuracy on infrequent answers for each question type, which provides evidence of improved generalization and a decreased dependency on dataset biases.
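To make the oracle-to-noisy transfer concrete, below is a minimal sketch of the fine-tuning setup described in the abstract. Everything here is an illustrative assumption rather than the paper's actual architecture: the class name VQATransformer, the feature dimensions, the answer-vocabulary size, and the optimizer settings are hypothetical, and the question-side inputs are omitted for brevity. The key idea it illustrates is that only the visual input projection depends on whether the model sees ground-truth object embeddings (oracle) or noisy detector features, so all other Transformer weights can be transferred and then fine-tuned.

```python
# A minimal, hypothetical sketch of transferring reasoning patterns from a
# visual oracle (ground-truth visual inputs) to a standard model (noisy
# detector features) via weight initialization and fine-tuning.
import torch
import torch.nn as nn


class VQATransformer(nn.Module):
    """Toy stand-in for a Transformer-based VQA model (question inputs omitted)."""

    def __init__(self, visual_dim, hidden_dim=768, num_answers=1000):
        super().__init__()
        # Only this projection depends on the visual input type:
        # GT object embeddings (oracle) vs. noisy detector features.
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            hidden_dim, nhead=8, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)
        self.classifier = nn.Linear(hidden_dim, num_answers)

    def forward(self, visual_feats):
        # visual_feats: (batch, num_objects, visual_dim)
        h = self.encoder(self.visual_proj(visual_feats))
        return self.classifier(h.mean(dim=1))  # pooled answer logits


# 1) Oracle: trained on ground-truth object annotations
#    (128-d GT embeddings is a made-up dimension for illustration).
oracle = VQATransformer(visual_dim=128)
# ... train `oracle` on GT visual inputs here ...

# 2) Transfer: initialize a standard model on noisy detector features
#    (e.g. 2048-d region features) from the oracle's weights, skipping
#    the input projection whose shape differs between the two settings.
student = VQATransformer(visual_dim=2048)
oracle_state = {
    k: v for k, v in oracle.state_dict().items()
    if not k.startswith("visual_proj")
}
student.load_state_dict(oracle_state, strict=False)

# 3) Fine-tune `student` end-to-end on noisy visual inputs so the
#    transferred reasoning patterns adapt to visual uncertainty.
optimizer = torch.optim.Adam(student.parameters(), lr=1e-5)
```

The design choice this sketch highlights is that the transfer happens at the level of the Transformer's attention and classification weights, where the reasoning patterns live, while the modality-specific input projection is re-learned during fine-tuning.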