Guidelines for Whom? Rethinking AI Ethics in Resource-Constrained Migration Services

Published: 29 Apr 2026, Last Modified: 29 Apr 2026 — Eval Eval @ ACL 2026 Poster — License: CC BY 4.0
Keywords: AI ethics, humanitarian AI, language access, refugee services, AI governance, realist evaluation, machine translation, responsible AI, resource-constrained organizations, informal AI adoption
TL;DR: AI ethics guidelines designed for well-resourced institutions are inaccessible to refugee-serving nonprofits, driving informal AI adoption without oversight and undermining intended protections.
Abstract: Responsible AI principles have had limited influence on practice in humanitarian settings. A growing body of published guidelines now governs AI and data use in these contexts, responding to documented risks including surveillance, data misuse, and discriminatory outcomes affecting refugee populations. For high-risk applications such as biometric identification and asylum adjudication, such guidelines address genuine and serious concerns. Many differentiate risk tiers in principle, yet the compliance expectations they establish—staff capacity, technical infrastructure, formal evaluation—reflect the organizational contexts in which such guidelines are most often developed. Across the humanitarian sector, however, deploying organizations vary considerably in resources, and many of the nonprofits providing frontline services to refugees operate with limited administrative capacity. When compliance requirements exceed what these organizations can reasonably meet, formal AI adoption stalls while informal adoption proceeds without oversight or recourse. Current guidelines also tend to treat non-adoption as a neutral default, without accounting for the service gaps that follow when AI-assisted language access is unavailable. Drawing on collaboration with refugee-serving practitioners, we show that this gap between governance design and organizational reality has real consequences for the people these guidelines are meant to protect. We argue that evaluating AI guidelines requires the same realist logic that evaluation research has long applied to social programs: not "does this guideline exist?" but "for which deployers, under what conditions, and does it produce its intended protective outcomes?"
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Type: Provocation
Archival Status: Archival
Submission Number: 8