Track: Long Paper Track (up to 9 pages)
Keywords: algorithmic fairness, generative AI, language models, text-to-image, evaluation, AI regulation, algorithmic discrimination, anti-discrimination law
TL;DR: We connect the legal and technical literature on GenAI bias evaluation, identify areas of misalignment, and illustrate through four case studies how this misalignment can yield discriminatory outcomes in real-world deployments.
Abstract: Generative AI (GenAI) models present new challenges for regulating discriminatory behavior. We argue that GenAI fairness research has not yet met these challenges; instead, a significant gap remains between bias assessment methods and regulatory goals. This leads to ineffective regulation that can allow the deployment of reportedly fair, yet actually discriminatory, GenAI systems. Toward remedying this problem, we connect the legal and technical literature on GenAI bias evaluation and identify areas of misalignment. Through four case studies, we demonstrate how this misalignment can result in discriminatory outcomes in real-world deployments, especially in adaptive or complex environments. We offer practical recommendations for improving discrimination testing to better align with regulatory goals and enhance the reliability of fairness assessments in the future.
Submission Number: 18