Abstract: Algorithmic fairness has been receiving increasing attention in recent years. Among the various notions, individual fairness, rooted in the dictionary definition of fairness, offers a fine-grained fairness criterion. At the algorithmic level, individual fairness can often be operationalized as a convex regularization term with respect to a similarity matrix. Appealing as it might be, a notorious challenge of individual fairness lies in finding an appropriate distance or similarity measure, which largely remains an open problem to date. Consequently, the similarity or distance measure used in almost any individually fair algorithm is likely to be imperfect for various reasons, such as imprecise prior/domain knowledge, noise, or even adversaries. In this paper, we take an important step towards resolving this fundamental challenge and ask: how sensitive is an individually fair learning algorithm to the given similarity measure? How can we make the learning results robust to imperfections in the given similarity measure? First (Soul-M), we develop a sensitivity measure that characterizes how the learning outcomes of an individually fair learning algorithm change in response to changes in the given similarity measure. Second (Soul-A), based on the proposed sensitivity measure, we further develop a robust individually fair algorithm via adversarial learning, which optimizes the similarity matrix to defend against L_∞ attacks. A unique advantage of our sensitivity measure and robust algorithm is that they are applicable to a broad range of learning models, as long as the objective function is twice differentiable. We conduct extensive experiments to demonstrate the efficacy of our methods.
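For concreteness, a common instantiation of a similarity-based individual-fairness regularizer and its adversarially robust counterpart is sketched below. The pairwise smoothness form, the L_∞-bounded perturbation set, and the symbols $S$, $\Delta$, $\epsilon$, $\lambda$ are illustrative assumptions, not necessarily the exact formulation used in this paper.

\[
  \min_{\theta} \; \mathcal{L}(\theta)
  \;+\; \lambda \sum_{i,j} S_{ij}\,\bigl\| f_{\theta}(x_i) - f_{\theta}(x_j) \bigr\|_2^{2},
\]
where $S$ is the given similarity matrix, $f_{\theta}$ the learned model, and $\lambda$ a trade-off weight. A robust variant that guards against an L_∞-bounded perturbation $\Delta$ of the similarity matrix can be posed as a min-max problem:
\[
  \min_{\theta} \; \max_{\|\Delta\|_{\infty} \le \epsilon} \;
  \mathcal{L}(\theta)
  \;+\; \lambda \sum_{i,j} \bigl(S_{ij} + \Delta_{ij}\bigr)\,\bigl\| f_{\theta}(x_i) - f_{\theta}(x_j) \bigr\|_2^{2}.
\]
Under this reading, the sensitivity question asks how the learned $\theta$ (or its outcomes) changes as $S$ is perturbed, and the robust algorithm trains against the worst-case $\Delta$ within the $\epsilon$-ball.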