ViOCRVQA: novel benchmark dataset and VisionReader for visual question answering by understanding Vietnamese text in images

Published: 01 Jan 2025 · Last Modified: 07 May 2025 · Multim. Syst. 2025 · CC BY-SA 4.0
Abstract: Optical Character Recognition-Visual Question Answering (OCR-VQA) is the task of answering questions about text contained in images; it has developed significantly for English in recent years. However, there are few studies of this task in low-resource languages such as Vietnamese. To this end, we introduce a novel dataset, ViOCRVQA (Vietnamese Optical Character Recognition-Visual Question Answering dataset), consisting of 28,000+ images and 120,000+ question-answer pairs. In this dataset, every image contains text, and the questions ask about information relevant to that text. We adapt ideas from state-of-the-art methods proposed for English to conduct experiments on our dataset, revealing the challenges and difficulties inherent in a Vietnamese dataset. Furthermore, we introduce a novel approach, called VisionReader, which achieved 41.16% EM and 69.90% F1-score on the test set. The results show that the OCR system plays an important role in VQA models on the ViOCRVQA dataset. In addition, the objects in the image also contribute to improving model performance. We open access to our dataset at https://github.com/qhnhynmm/ViOCRVQA.git for further research on the OCR-VQA task in Vietnamese. The code for the proposed method, along with the models used in the experimental evaluation, is available at https://github.com/minhquan6203/VisionReader.git.
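The abstract reports results in EM (exact match) and token-level F1, the standard metrics for extractive QA. The paper's exact answer-normalization rules are not specified here; the sketch below shows a common SQuAD-style formulation (lowercasing and whitespace tokenization are assumptions, not the authors' documented procedure):

```python
from collections import Counter

def exact_match(prediction: str, ground_truth: str) -> float:
    """1.0 if the normalized prediction equals the gold answer, else 0.0."""
    return float(prediction.strip().lower() == ground_truth.strip().lower())

def f1_score(prediction: str, ground_truth: str) -> float:
    """Token-level F1 between predicted and gold answer strings."""
    pred_tokens = prediction.strip().lower().split()
    gold_tokens = ground_truth.strip().lower().split()
    # Count tokens shared between prediction and gold (with multiplicity).
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

Dataset-level EM and F1 are then the averages of these per-example scores over all question-answer pairs in the test set.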