Robots Enact Malignant Stereotypes

11 May 2022 (modified: 22 Oct 2023) · L-DOD 2022 · Readers: Everyone
Keywords: Robotics, Algorithmic Audit, Gender, Classification, machine learning, artificial intelligence, deep learning, bias, discrimination, justice, computer vision, imitation learning, learning from demonstration, robotic manipulation
TL;DR: Robotic AI systems enact race, gender, scientifically-discredited physiognomy, and other harmful stereotypes. Merely correcting disparities will be insufficient to address the problem.
Abstract: Stereotypes, bias, and discrimination have been extensively documented in Machine Learning (ML) methods such as Computer Vision (CV) [18, 80], Natural Language Processing (NLP) [6], or both, in the case of large image and caption models such as OpenAI CLIP [14]. In this paper, we evaluate how ML bias manifests in robots that physically and autonomously act within the world. We audit one of several recently published CLIP-powered robotic manipulation methods, presenting it with objects that have pictures of human faces on the surface, varying across race and gender, alongside task descriptions that contain terms associated with common stereotypes. Our experiments definitively show robots acting out toxic stereotypes with respect to gender, race, and scientifically-discredited physiognomy, at scale. Furthermore, the audited methods are less likely to recognize Women and People of Color. Our interdisciplinary sociotechnical analysis synthesizes across fields and applications such as Science, Technology, and Society (STS), Critical Studies, History, Safety, Robotics, and AI. We find that robots powered by large datasets and Dissolution Models (sometimes called “foundation models”, e.g. CLIP) that contain humans risk physically amplifying malignant stereotypes in general, and that merely correcting disparities will be insufficient for the complexity and scale of the problem. Instead, we recommend that robot learning methods that physically manifest stereotypes or other harmful outcomes be paused, reworked, or even wound down when appropriate, until outcomes can be proven safe, effective, and just. Finally, we discuss comprehensive policy changes and the potential of new interdisciplinary research on topics like Identity Safety Assessment Frameworks and Design Justice to better understand and address these harms.

Website: https://sites.google.com/view/robots-enact-stereotypes/home
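The audit design pairs face images with task descriptions containing stereotype-associated terms and observes which pairings a CLIP-backed policy acts on. The sketch below illustrates only the underlying image-text similarity signal such a policy consumes, using the open-source `clip` package; it is not the paper's full robot pipeline, and the image path and prompt strings are hypothetical placeholders.

```python
# Minimal sketch of CLIP image-text scoring, the matching signal that
# CLIP-powered manipulation methods build on. Assumes the openai/CLIP
# package is installed; "faces/example.jpg" is a hypothetical input.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# A neutral task phrase alongside a stereotype-associated one,
# mirroring the kind of prompts the audit pairs with face images.
prompts = ["a photo of a person", "a photo of a criminal"]
text = clip.tokenize(prompts).to(device)

image = preprocess(Image.open("faces/example.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

# Cosine similarity between the image and each prompt: a policy driven
# by this score will act on whichever pairing ranks highest,
# stereotype-laden or not.
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
similarity = (image_features @ text_features.T).squeeze(0)

for prompt, score in zip(prompts, similarity.tolist()):
    print(f"{score:.3f}  {prompt}")
```

Because the scores are relative, a higher similarity for a stereotype-laden prompt directly translates into the physical behavior the paper documents: the robot selects and moves the corresponding object.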
Dual Submission: Accepted to ACM Conference on Fairness, Accountability, and Transparency 2022.
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2207.11569/code)