What Is Missing in Multilingual Visual Reasoning and How to Fix It

ACL ARR 2024 April Submission868 Authors

16 Apr 2024 (modified: 15 May 2024) · ACL ARR 2024 April Submission · CC BY 4.0
Abstract: NLP models today strive to support multiple languages and modalities, improving accessibility for diverse users. In this paper, we evaluate their multilingual, multimodal capabilities on a visual reasoning task. We observe that proprietary systems such as GPT-4V currently obtain the best performance on this task, while open models lag behind. Surprisingly, GPT-4V exhibits similar performance between English and other languages, indicating the potential for equitable system development across languages. Our analysis of model failures reveals three key aspects that make this task challenging: multilinguality, complex reasoning, and multimodality. To address these challenges, we propose three targeted interventions: a translate-test approach to tackle multilinguality, a visual programming approach to break down complex reasoning, and a method that leverages image captioning to address multimodality. Our interventions achieve the \emph{best} open performance on this task in a zero-shot setting, boosting the open models LLaVA-v1.5-13B by 13.4\%, LLaVA-v1.6-34B by 20.3\%, and Qwen-VL by 16.7\%, while also slightly improving GPT-4V's performance.
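To make the translate-test intervention mentioned above concrete, here is a minimal sketch of the general idea: translate the non-English question into English before querying a vision-language model, so that reasoning happens in the language where the model is strongest. The helper names (`translate_to_english`, `vlm_answer`) are hypothetical placeholders, not the authors' actual implementation.

```python
# Minimal translate-test sketch (hypothetical helper names, not the paper's code).
from typing import Callable


def translate_test_answer(
    question: str,
    image_path: str,
    translate_to_english: Callable[[str], str],   # any MT system, e.g. an NMT model or API
    vlm_answer: Callable[[str, str], str],        # (english_question, image_path) -> answer
) -> str:
    """Answer a visual-reasoning question using the translate-test strategy."""
    english_question = translate_to_english(question)  # handle multilinguality via translation
    return vlm_answer(english_question, image_path)    # reason over the image in English
```

The other two interventions named in the abstract (visual programming and caption-based prompting) would slot into the same pipeline by replacing `vlm_answer` with a program-decomposition step or a caption-then-reason step.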
Paper Type: Long
Research Area: Multilingualism and Cross-Lingual NLP
Research Area Keywords: multilingual evaluation, multilingualism, cross-lingual transfer, less-resourced languages
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Reproduction study, Approaches to low-resource settings, Data analysis
Languages Studied: English, Indonesian, Swahili, Tamil, Turkish, Mandarin Chinese
Submission Number: 868