Improving Fairness without Demographics by Lagged Dynamic Grouping

Anonymous

16 Jan 2022 (modified: 05 May 2023) · ACL ARR 2022 January Blind Submission · Readers: Everyone
Abstract: Machine learning models are prone to social biases in datasets and can thus make discriminatory decisions against demographic minority groups. Most existing fairness-promoting methods assume access to annotations of demographic information. However, such information can be inaccessible due to high data annotation costs and privacy restrictions. Recently, distributionally robust optimization (DRO) techniques have been applied to promote fairness without demographic labels. DRO-based methods optimize the individuals/groups with the worst prediction performance, with the intuition that these groups roughly correspond to the minority groups being biased against. However, in complex real-world settings with multiple strong bias attributes, the simple grouping schemes in existing DRO-based methods can fail to identify the ground-truth minority groups. In this paper, we propose FreeDRO, a demographic-free group DRO method featuring a more principled grouping scheme, called lagged dynamic grouping. Specifically, FreeDRO dynamically splits the training data based on the ground-truth labels and the predictions of the model at an earlier iteration, and then optimizes worst-group performance. Extensive experiments on five real-world datasets show that our method can effectively alleviate biases and even achieve results comparable to methods with full demographic annotations. The results also verify that our grouping scheme corresponds well with the ground-truth demographic grouping.
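The core mechanism the abstract describes — grouping examples by (true label, prediction of a lagged model snapshot) and then minimizing the worst group's average loss — can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the helper names `lagged_dynamic_groups` and `group_dro_loss` are hypothetical, and the paper's exact grouping rule and DRO objective may differ.

```python
def lagged_dynamic_groups(labels, lagged_preds):
    """Assign each example a group key (true label, lagged-model prediction).

    `lagged_preds` are the predictions of a model checkpoint from an earlier
    training iteration, as sketched from the abstract's description.
    """
    return list(zip(labels, lagged_preds))


def group_dro_loss(losses, groups):
    """Worst-group objective: the mean per-example loss of the group with
    the highest average loss (a standard group-DRO surrogate)."""
    sums, counts = {}, {}
    for loss, g in zip(losses, groups):
        sums[g] = sums.get(g, 0.0) + loss
        counts[g] = counts.get(g, 0) + 1
    return max(sums[g] / counts[g] for g in sums)


# Toy usage: examples the lagged model misclassified form their own group,
# so a high loss concentrated there dominates the objective.
labels       = [0, 0, 1, 1]
lagged_preds = [0, 1, 1, 1]          # predictions from an earlier checkpoint
losses       = [0.2, 0.9, 0.3, 0.1]  # per-example losses of the current model
groups = lagged_dynamic_groups(labels, lagged_preds)
worst = group_dro_loss(losses, groups)  # worst group: (label 0, predicted 1)
```

Training would then backpropagate through `worst` (or a smoothed re-weighting of group losses) instead of the plain average, so the optimizer focuses on the inferred minority group.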
Paper Type: long
