Fairness for the People, by the People: Minority Collective Action

03 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Collective action, Fairness
Abstract: Machine learning models often preserve biases present in their training data, leading to unfair treatment of certain minority groups. Although an array of firm-side bias mitigation techniques exists, they typically incur utility costs and require organizational buy-in. Recognizing that many such models rely on user-contributed data, we introduce a framework for minority Algorithmic Collective Action, in which a coordinated minority group strategically relabels its own data to enhance fairness, without altering the firm’s training process. We propose three practical, model-agnostic strategies to approximate ideal relabeling and validate them on five real-world datasets. Our findings show that, with limited participation, a collective can substantially reduce unfairness at a small cost in overall accuracy.
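To make the core mechanism concrete, the following is a minimal sketch of the relabeling step the abstract describes: a coordinated subset of the minority group flips some of its own negative labels to positive before the data reaches the firm's (unmodified) training pipeline. The function name, signature, and uniform-random choice of which points to flip are hypothetical illustrations, not the paper's actual strategies, which are described as model-agnostic approximations of an ideal relabeling.

```python
import random

def collective_relabel(labels, minority_mask, participation, seed=0):
    """Hypothetical sketch of minority collective action via relabeling.

    labels:         list of 0/1 training labels (user-contributed)
    minority_mask:  list of bools, True for minority-group members
    participation:  fraction of eligible minority points the collective flips
    """
    rng = random.Random(seed)
    labels = list(labels)  # do not mutate the caller's data
    # The collective only touches its own data: minority points labeled 0.
    candidates = [i for i, (y, m) in enumerate(zip(labels, minority_mask))
                  if m and y == 0]
    n_flip = int(participation * len(candidates))
    for i in rng.sample(candidates, n_flip):
        labels[i] = 1  # relabel negative -> positive
    return labels

# Tiny usage example: 8 points, 4 minority members labeled 0.
y = [0, 0, 1, 0, 1, 0, 0, 1]
minority = [True, True, False, True, False, True, False, False]
y_new = collective_relabel(y, minority, participation=0.5)
```

With 50% participation, two of the four eligible minority points are relabeled, while majority-group labels are untouched; the firm then trains on the modified labels exactly as before.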
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 1504