LinguaMark: Do Multimodal Models Speak Fairly? A Benchmark-Based Evaluation

Published: 24 Sept 2025 · Last Modified: 21 Nov 2025 · NeurIPS 2025 LLM Evaluation Workshop Poster · CC BY 4.0
Keywords: Multimodal models; Multilingual; Benchmarks; Evaluation
TL;DR: We evaluate how open- and closed-source multimodal models perform on a multilingual benchmark of socially relevant images and texts.
Abstract: Large Multimodal Models (LMMs) are typically trained on vast corpora of image-text data but are often limited in linguistic coverage, leading to biased and unfair outputs across languages. While prior work has explored multimodal evaluation, less emphasis has been placed on assessing multilingual capabilities. In this work, we introduce \texttt{LinguaMark}, a benchmark designed to evaluate state-of-the-art LMMs on a multilingual Visual Question Answering (VQA) task. Our dataset comprises 6,875 image-text pairs spanning 11 languages and 5 social attributes. We evaluate models using three key metrics: Bias, Answer Relevancy, and Faithfulness. Our findings reveal that closed-source models generally achieve the highest overall performance. Both closed-source and open-source models perform competitively across social attributes, and \texttt{Qwen2.5} demonstrates strong generalization across multiple languages. We release our benchmark and evaluation code at this $\href{https://anonymous.4open.science/r/LinguaMark-E008}{link}$ to encourage reproducibility and further research.
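To make the evaluation setup concrete, the sketch below shows how a multilingual VQA sample might be scored on the three reported metrics (Bias, Answer Relevancy, Faithfulness) and aggregated per model. This is a minimal, hypothetical illustration, not the authors' released code: the `VQASample` fields and the three scoring functions are placeholder assumptions; the actual metric implementations are in the linked repository.

```python
# Hypothetical sketch of a LinguaMark-style evaluation loop.
# All classes and scoring functions here are illustrative placeholders.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class VQASample:
    image_id: str    # identifier of the socially relevant image
    question: str    # question text in one of the benchmark languages
    language: str    # e.g. "en", "hi", "zh"
    attribute: str   # one of the social attributes covered by the benchmark
    reference: str   # reference answer for faithfulness comparison


def bias_score(answer: str, attribute: str) -> float:
    """Placeholder: higher when the answer avoids stereotyped wording for the attribute."""
    return 1.0


def relevancy_score(answer: str, question: str) -> float:
    """Placeholder: higher when the answer addresses the question that was asked."""
    return 1.0


def faithfulness_score(answer: str, reference: str) -> float:
    """Placeholder: higher when the answer agrees with the image-grounded reference."""
    return 1.0


def evaluate(samples: List[VQASample],
             model_answer: Callable[[VQASample], str]) -> Dict[str, float]:
    """Average the three metrics over all samples for one model."""
    totals = {"bias": 0.0, "relevancy": 0.0, "faithfulness": 0.0}
    for s in samples:
        a = model_answer(s)  # the LMM's answer string for this sample
        totals["bias"] += bias_score(a, s.attribute)
        totals["relevancy"] += relevancy_score(a, s.question)
        totals["faithfulness"] += faithfulness_score(a, s.reference)
    n = max(len(samples), 1)
    return {metric: value / n for metric, value in totals.items()}
```

In this framing, comparing open- and closed-source LMMs amounts to running `evaluate` with a different `model_answer` callable per model and comparing the per-metric averages, optionally grouped by language or social attribute.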
Submission Number: 64