Addressing Fairness in Classification with a Model-Agnostic Multi-Objective Algorithm

14 Oct 2021 · OpenReview Archive Direct Upload
Abstract: The goal of fairness in classification is to learn a classifier that does not discriminate against groups of individuals based on sensitive attributes, such as race and gender. One approach to designing fair algorithms is to use relaxations of fairness notions as regularization terms or in a constrained optimization problem. We observe that the hyperbolic tangent function can approximate the indicator function. We leverage this property to define a differentiable relaxation that approximates fairness notions provably better than existing relaxations. In addition, we propose a model-agnostic multi-objective architecture that can simultaneously optimize for multiple fairness notions and multiple sensitive attributes and supports all statistical parity-based notions of fairness. We use our relaxation with the multi-objective architecture to learn fair classifiers. Experiments on public datasets show that our method suffers a significantly lower loss of accuracy than current debiasing algorithms relative to the unconstrained model.
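The abstract does not spell out the relaxation concretely, so the following is a minimal, hypothetical sketch of the general idea: replace the hard indicator 1[score > 0] inside a statistical parity-based fairness term with a hyperbolic tangent surrogate so the term becomes differentiable. The function names, the sharpness parameter c, and the choice of demographic parity as the fairness notion are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def soft_indicator(scores, c=5.0):
    """Smooth surrogate for the indicator 1[score > 0] using tanh.

    As c grows, 0.5 * (1 + tanh(c * score)) approaches the hard
    indicator while remaining differentiable everywhere.
    (Illustrative sketch; not the paper's exact relaxation.)
    """
    return 0.5 * (1.0 + np.tanh(c * scores))

def demographic_parity_gap(scores, group, c=5.0):
    """Approximate |P(yhat=1 | group=0) - P(yhat=1 | group=1)|
    with the tanh surrogate, so it can serve as a penalty term.
    """
    soft_preds = soft_indicator(scores, c)
    rate_0 = soft_preds[group == 0].mean()
    rate_1 = soft_preds[group == 1].mean()
    return abs(rate_0 - rate_1)

# Toy usage with random decision scores and a binary sensitive attribute.
rng = np.random.default_rng(0)
scores = rng.normal(size=100)          # classifier decision scores
group = rng.integers(0, 2, size=100)   # sensitive attribute (0 or 1)
print(demographic_parity_gap(scores, group))
```

In a training loop, a gap like this would typically be added to the classification loss with a tradeoff weight, which is how relaxation-based regularization approaches to fairness are usually applied.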