LLM Targeted Underperformance Disproportionately Impacts Vulnerable Users

Published: 20 Oct 2024, Last Modified: 14 Nov 2024, SafeGenAi Poster, CC BY 4.0
Keywords: model bias/fairness evaluation, ethical implications of deploying generative AI
TL;DR: State-of-the-art large language models systematically underperform for users with lower English proficiency, less education, and non-US origins, exacerbating model reliability issues for these vulnerable groups.
Abstract: While state-of-the-art Large Language Models (LLMs) have shown impressive performance on many tasks, systematically evaluating their undesirable behaviors remains critical. In this work, we examine ethics and bias in terms of how model behavior changes with three user traits: English proficiency, education level, and country of origin. We evaluate how fairly LLMs respond to different users with respect to information accuracy, truthfulness, and refusals. We present extensive experiments on three state-of-the-art LLMs and two datasets targeting truthfulness and factuality. Our findings suggest that undesirable behaviors occur disproportionately more often for users with lower English proficiency, lower education status, and origins outside the US, rendering these models unreliable sources of information for their most vulnerable users.
Submission Number: 12