Abstract: Large Vision-Language Models (LVLMs) have demonstrated remarkable capabilities across various tasks, yet they exhibit significant performance disparities across languages, particularly in multilingual multimodal scenarios. Existing multilingual benchmarks suffer from limitations including corpus-specific content biases, disjointed multimodal input formats, and a lack of safety evaluation. To address these gaps, we propose PM4Bench, the first Parallel Multilingual Multi-Modal Multi-task Benchmark for LVLMs. PM4Bench features a parallel corpus design across 10 languages, enabling fair and accurate cross-lingual comparisons. It includes a vision setting in which text and queries are embedded in images, requiring LVLMs to simultaneously "see," "read," and "think," in line with real-world applications. Additionally, PM4Bench incorporates safety evaluations, addressing a critical oversight in existing multilingual benchmarks. Using PM4Bench, we evaluate 11 mainstream LVLMs, revealing significant cross-lingual performance disparities, particularly in the vision setting, and identifying OCR capability as a key determinant of these imbalances. We will release PM4Bench on GitHub.
Warning: This paper contains potentially offensive and harmful text.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: Resources and Evaluation
Contribution Types: Data resources, Data analysis
Languages Studied: English, Chinese, Korean, Thai, Vietnamese, Russian, Hungarian, Serbian, Czech, Arabic
Submission Number: 7318