A Stereotype Content Analysis on Color-related Social Bias in Large Vision Language Models

ACL ARR 2026 January Submission 903 Authors

26 Dec 2025 (modified: 20 Mar 2026) · CC BY 4.0
Keywords: LVLM stereotypes, Psychology-based evaluation metric, Color-related stereotypes
Abstract: As large vision language models (LVLMs) rapidly advance, concerns are growing about the risk that they learn or generate stereotypes. However, previous studies of LVLMs' stereotypes face two primary limitations: metrics that overlook the importance of content words, and datasets that overlook the effect of color. To address these limitations, this study introduces new evaluation metrics based on the Stereotype Content Model (SCM). The SCM-based metric is grounded in social psychology, enabling the detection of stereotypical content words in model responses along the dimensions of competence and warmth. We also propose BASIC, a benchmark for assessing gender, race, and color stereotypes. Using the SCM metrics and BASIC, we study eight LVLMs to uncover stereotypes. Our study yields three findings. (1) The SCM-based evaluation is effective in capturing stereotypes. (2) LVLMs exhibit color stereotypes in their output, alongside gender and race ones. (3) The interaction between model architecture and parameter size appears to affect stereotypes. We release BASIC publicly on [anonymized for review].
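To make the idea of an SCM-based metric concrete, the sketch below scores a model response along the warmth and competence dimensions by matching content words against dimension lexicons. This is a minimal illustration under assumed toy word lists, not the paper's actual lexicons or scoring procedure.

```python
# Illustrative Stereotype Content Model (SCM) scoring sketch.
# The word lists below are toy examples, NOT the paper's lexicons.

WARMTH = {"friendly", "kind", "warm", "sincere", "caring"}
COMPETENCE = {"intelligent", "skilled", "competent", "capable", "efficient"}


def scm_scores(response: str) -> dict:
    """Return the fraction of tokens matching each SCM dimension lexicon."""
    tokens = [t.strip(".,!?").lower() for t in response.split()]
    if not tokens:
        return {"warmth": 0.0, "competence": 0.0}
    n = len(tokens)
    return {
        "warmth": sum(t in WARMTH for t in tokens) / n,
        "competence": sum(t in COMPETENCE for t in tokens) / n,
    }


# Example: a response leaning on warmth words more than competence words
scores = scm_scores("She seems friendly and caring, but not very skilled.")
```

In a real evaluation, such per-response scores would be aggregated across demographic or color conditions to reveal systematic differences in how warm or competent a model portrays each group.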
Paper Type: Long
Research Area: Computational Social Science, Cultural Analytics, and NLP for Social Good
Research Area Keywords: NLP tools for social analysis, model bias/fairness evaluation, multimodal applications, evaluation methodologies
Contribution Types: Model analysis & interpretability, Data resources, Data analysis
Languages Studied: English
Submission Number: 903