Abstract: We propose a novel multimodal architecture for Scene
Text Visual Question Answering (STVQA), named Layout-Aware Transformer (LaTr). The task of STVQA requires
models to reason over different modalities. Thus, we first
investigate the impact of each modality, and reveal the importance of the language module, especially when enriched
with layout information. Accounting for this, we propose a
single objective pre-training scheme that requires only text
and spatial cues. We show that applying this pre-training
scheme on scanned documents has certain advantages over
using natural images, despite the domain gap. Scanned
documents are easy to procure, text-dense and have a variety of layouts, helping the model learn various spatial cues
(e.g., left-of, below, etc.) by tying together language and
layout information. Compared to existing approaches, our
method performs vocabulary-free decoding and, as shown,
generalizes well beyond the training vocabulary. We further
demonstrate that LaTr improves robustness towards OCR
errors, a common reason for failure cases in STVQA. In
addition, by leveraging a vision transformer, we eliminate
the need for an external object detector. LaTr outperforms
state-of-the-art STVQA methods on multiple datasets. In
particular, +7.6% on TextVQA, +10.8% on ST-VQA and
+4.0% on OCR-VQA (all absolute accuracy numbers).
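The abstract does not spell out the architecture, but the core idea it describes, enriching the language module with layout information, can be illustrated with a minimal sketch: each OCR token embedding is summed with learned embeddings of its discretized bounding-box coordinates before being fed to a T5-style encoder-decoder that generates the answer token by token (hence vocabulary-free decoding). All module and parameter names below (LayoutAwareEmbedding, num_buckets, the bucket discretization) are illustrative assumptions, not the authors' implementation.

```python
# Sketch (not the authors' code): layout-aware OCR-token embeddings.
# Each token embedding is summed with embeddings of its discretized
# bounding-box coordinates (x0, y0, x1, y1), so the language model sees
# both what a token says and where it sits on the page or document.
import torch
import torch.nn as nn


class LayoutAwareEmbedding(nn.Module):
    def __init__(self, vocab_size: int, d_model: int, num_buckets: int = 1000):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Separate tables for horizontal and vertical coordinates (assumed).
        self.x_emb = nn.Embedding(num_buckets, d_model)
        self.y_emb = nn.Embedding(num_buckets, d_model)
        self.num_buckets = num_buckets

    def forward(self, token_ids: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) OCR-token ids
        # boxes:     (batch, seq_len, 4) normalized [x0, y0, x1, y1] in [0, 1]
        buckets = (boxes * (self.num_buckets - 1)).long()
        spatial = (
            self.x_emb(buckets[..., 0]) + self.y_emb(buckets[..., 1])
            + self.x_emb(buckets[..., 2]) + self.y_emb(buckets[..., 3])
        )
        return self.token_emb(token_ids) + spatial


# Usage: embeddings of this form could be pre-trained on scanned documents
# (text + layout only) and later consumed by an encoder-decoder for STVQA.
emb = LayoutAwareEmbedding(vocab_size=32128, d_model=512)
ids = torch.randint(0, 32128, (2, 20))
boxes = torch.rand(2, 20, 4)
print(emb(ids, boxes).shape)  # torch.Size([2, 20, 512])
```

Because the answer is generated by a decoder over subword tokens rather than selected from a fixed answer list, the model is not restricted to the training vocabulary, which is what the abstract means by vocabulary-free decoding.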