Empowering Domain Experts to Detect Social Bias in Generative AI with User-Friendly Interfaces

Published: 27 Oct 2023, Last Modified: 22 Nov 2023, NeurIPS XAIA 2023
TL;DR: This work presents the design and implementation of user-friendly interfaces that empower social-domain experts to detect and quantify social bias in generative AI models.
Abstract: Generative AI models have become vastly popular and now drive applications across the modern economy. Detecting and quantifying the implicit social biases they inherit during training, such as racial and gender biases, is a critical first step toward avoiding discriminatory outcomes. However, current bias testing methods are difficult to use and inflexible, presenting an obstacle for domain experts such as social scientists, ethicists, and gender studies scholars. To address this challenge, we present two comprehensive open-source bias testing tools hosted on HuggingFace: BiasTestGPT for pretrained language models (PLMs) and BiasTestVQA for visual question answering (VQA) models. These tools offer an intuitive and flexible approach to social bias testing, allowing unprecedented ease in detecting and quantifying social bias across multiple generative AI models and mediums.
Submission Track: Demo Track
Application Domain: None of the above / Not applicable
Clarify Domain: Social Bias in NLP and Computer Vision models
Survey Question 1: We introduce the design and implementation of user-friendly interfaces that provide a streamlined, intuitive method for detecting and quantifying social bias in generative AI models, empowering domain experts such as social scientists and ethicists to participate more easily in addressing social bias. Such participation would greatly improve our understanding of which biases are present in these models and how they can be mitigated to prevent the dissemination of bias in applications built on generative AI.
Survey Question 2: Many existing methods for detecting and quantifying social bias are difficult to use, particularly for domain experts. These usability problems stem from reliance on static datasets, the lack of a graphical user interface, and incompatibility with new generative AI models.
Survey Question 3: Social bias metrics: the Stereotype Score from StereoSet (https://aclanthology.org/2021.acl-long.416/) for text models, and the association test of Caliskan et al. (https://www.science.org/doi/10.1126/science.aal4230) adapted to vision models.
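The two metrics above can be made concrete with short sketches. First, StereoSet's Stereotype Score is the percentage of instances in which a model prefers a stereotypical association over an anti-stereotypical one (50% is the unbiased ideal). Below is a minimal sketch for masked PLMs that scores each sentence by pseudo-log-likelihood; the model name, scoring rule, and sentence pairs are illustrative assumptions, not necessarily what BiasTestGPT uses internally.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Illustrative model choice; any HuggingFace masked LM works the same way.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Score a sentence by masking each token in turn and summing the
    log-probability the model assigns to the original token."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

def stereotype_score(pairs) -> float:
    """Fraction of (stereotype, anti-stereotype) sentence pairs for which
    the model prefers the stereotypical sentence; 0.5 is unbiased."""
    wins = sum(pseudo_log_likelihood(s) > pseudo_log_likelihood(a)
               for s, a in pairs)
    return wins / len(pairs)
```

Second, the association test of Caliskan et al. (WEAT) compares cosine similarities between two target concept sets (X, Y) and two attribute sets (A, B) in an embedding space, reporting an effect size d. A minimal sketch follows, assuming embeddings are available as NumPy vectors; the word lists and random vectors in the demo are placeholders, not the tool's actual data.

```python
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, emb):
    """s(w, A, B): mean similarity of w to attributes A minus to B."""
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    """Effect size d from Caliskan et al. (2017): difference of mean
    target-set associations, normalized by their pooled std deviation."""
    s_X = [association(x, A, B, emb) for x in X]
    s_Y = [association(y, A, B, emb) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)

# Toy demo with random vectors; real use would take text or image
# embeddings from the model under test.
rng = np.random.default_rng(0)
vocab = ["science", "art", "man", "woman", "male", "female"]
emb = {w: rng.normal(size=50) for w in vocab}
print(weat_effect_size(["science"], ["art"],
                       ["man", "male"], ["woman", "female"], emb))
```

Adapting the association test to vision models then amounts, in principle, to swapping in embeddings derived from images or VQA responses while keeping the same statistic.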
Submission Number: 54