Algorithmic Exclusions, Collective Inclusions: Surveying Algorithmic Harms and Collective Action in LGBTQIA2S+ and Marginalized Communities

13 Apr 2026 (modified: 27 Apr 2026) · Under review for TMLR · CC BY 4.0
Abstract: LGBTQIA2S+ and marginalized communities often face harms when interacting with algorithmic systems, such as misgendering, content suppression, and other forms of exclusion. In this paper, we examine the social context that enables these harms in LGBTQIA2S+ spaces, summarize the existing literature on algorithmic harms, and explore how communities can leverage collective action to regain their agency. We survey methods that exploit properties of machine learning systems, such as data dependence and adversarial vulnerability, to resist these harms through collective action. We categorize existing approaches to resisting these harms, organized by four collective motivations: reporting and contesting harms (Model Auditing and Challenging Algorithmic Decision Strategies), opting out of model training or decision-making (Algorithmic Opt-Out Strategies), actively intervening to shift model behavior (Collective Intervention Strategies), and seeking recommendations for favorable outcomes (Decision Modification and Recommendation Strategies). Through a mapping review, we systematically chart where LGBTQIA2S+ and other marginalized communities appear in this literature and where they are absent. Our mapping reveals that while these communities are well-represented in platform-based strategies such as folk theorization and data activism, they are nearly absent from model-based methods such as adversarial techniques and algorithmic collective action, where machine learning researchers have focused their efforts. These gaps highlight opportunities for ML researchers and developers to build community-focused tools and methods that enable collectives to coordinate responses to algorithmic harms and regain agency over the systems that affect them. By mapping resistance methods across data-based, model-based, and platform-based mechanisms, we identify where the current literature falls short in supporting the communities most affected by algorithmic harms.
Submission Type: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Lu_Zhang3
Submission Number: 8402