Knowledge Graphical Representation and Evaluation of Social Perception and Bias in Text-to-Image Models

Published: 03 Sept 2025, Last Modified: 03 Sept 2025, LMKR-TrustAI, CC BY 4.0
Keywords: Knowledge Graph, Social Bias Evaluation, Text-to-Image, Multimodal Fairness
TL;DR: This paper introduces the SocialBiasKG (human perception) and ModelBiasKG (model outputs) frameworks to evaluate and analyze occupation–race bias in T2I models, highlighting systemic underrepresentation and differences in bias levels.
Abstract: Text-to-Image (T2I) models have advanced rapidly and can now generate high-quality images from natural language prompts; yet their outputs often exhibit social biases, especially along demographic lines such as occupation and race, raising concerns about the fairness and trustworthiness of T2I systems. Current evaluations rely mainly on statistical disparity measures and often overlook the connection to social acceptance and normative expectations. To provide a socially grounded framework, we introduce SocialBiasKG (human perception), a structured knowledge graph that captures social nuances in occupation–race bias through directed edges drawn from a global taxonomy: Stereotype, Association, Dominance, and Underrepresentation. We develop (1) a comprehensive bias evaluation dataset and (2) a detailed protocol customized for each edge type and direction. The evaluation metrics, covering style similarity, representational bias, and image quality, are applied to ModelBiasKG (model outputs). This enables systematic comparisons across models and against the human-annotated SocialBiasKG, revealing whether T2I models reproduce, distort, or diverge from cultural norms. We demonstrate that our KG-based framework detects nuanced, socially important biases and highlights key gaps between human perception and model behavior. Our approach offers a socially grounded, interpretable, and extensible method for evaluating bias in generative vision models.
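To make the KG-based comparison concrete, below is a minimal Python sketch of how a human-perception bias graph and a model-output bias graph might be represented and compared. The four edge types come from the taxonomy named in the abstract; the node names, edge instances, and set-based agreement report are illustrative assumptions, not the paper's actual dataset, schema, or evaluation protocol.

```python
# Sketch: typed, directed bias edges and a structural comparison between
# a human-perception KG (SocialBiasKG) and a model-output KG (ModelBiasKG).
# Node labels and example edges below are hypothetical placeholders.
from dataclasses import dataclass
from enum import Enum


class EdgeType(Enum):
    # Edge taxonomy as named in the abstract.
    STEREOTYPE = "stereotype"
    ASSOCIATION = "association"
    DOMINANCE = "dominance"
    UNDERREPRESENTATION = "underrepresentation"


@dataclass(frozen=True)  # frozen -> hashable, so edges can live in sets
class BiasEdge:
    source: str      # e.g. an occupation node
    target: str      # e.g. a demographic-group node
    type: EdgeType   # directed, typed edge per the taxonomy


def kg_agreement(human_kg: set[BiasEdge], model_kg: set[BiasEdge]) -> dict:
    """Partition edges into those the model reproduces, those it misses
    (human-perceived but absent from outputs), and those it introduces."""
    return {
        "reproduced": human_kg & model_kg,
        "missing_in_model": human_kg - model_kg,
        "model_only": model_kg - human_kg,
    }


if __name__ == "__main__":
    # Hypothetical edges for illustration only.
    social = {BiasEdge("occupation:nurse", "group:A", EdgeType.STEREOTYPE)}
    model = {
        BiasEdge("occupation:nurse", "group:A", EdgeType.STEREOTYPE),
        BiasEdge("occupation:CEO", "group:B", EdgeType.DOMINANCE),
    }
    report = kg_agreement(social, model)
    print({k: len(v) for k, v in report.items()})
```

In this reading, "reproduce" corresponds to the intersection, while "distort or diverge" corresponds to the two difference sets; the paper's actual per-edge-type metrics (style similarity, representational bias, image quality) would refine this purely structural view.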
Submission Number: 6