Towards Cross-Lingual Explanation of Artwork in Large-scale Vision Language Models

ACL ARR 2024 August Submission416 Authors

16 Aug 2024 (modified: 18 Sept 2024) · ACL ARR 2024 August Submission · CC BY 4.0
Abstract: As the performance of Large-scale Vision Language Models (LVLMs) improves, they are increasingly capable of responding in multiple languages, and demand for the explanations they generate is expected to grow. However, both the pre-training of the vision encoder and the integrated training of the LLM with the vision encoder are conducted mainly on English data, so it remains uncertain whether LVLMs can fully realize their potential when generating explanations in languages other than English. In addition, multilingual QA benchmarks whose datasets are built with machine translation carry cultural mismatches and biases, which limits their use as evaluation tasks. To address these challenges, this study created a dataset extended into multiple languages without relying on machine translation. This dataset, which accounts for nuances and country-specific phrasing, was then used to evaluate the explanation-generation abilities of LVLMs. Furthermore, this study examined whether Instruction-Tuning in resource-rich English improves performance in other languages. Our findings indicate that LVLMs perform worse in languages other than English than in English, and that they struggle to effectively leverage the knowledge learned from English data.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: Multilingual, Multimodal, Artwork Explanation, Vision & Language
Contribution Types: Model analysis & interpretability, Data resources, Data analysis
Languages Studied: Chinese, Dutch, English, French, German, Italian, Japanese, Russian, Spanish, Swedish
Submission Number: 416