Influence Based Approaches to Algorithmic Fairness: A Closer Look

Published: 27 Oct 2023, Last Modified: 22 Nov 2023, NeurIPS XAIA 2023
TL;DR: We take a closer look at approaches for editing pre-trained models by reweighting training instances via influence functions
Abstract: Off-the-shelf pre-trained models are increasingly common in machine learning. When deployed in the real world, it is essential that such models are not just accurate but also demonstrate qualities like fairness. This paper takes a closer look at recently proposed approaches that edit a pre-trained model for group fairness by re-weighting the training data. We offer perspectives that unify disparate weighting schemes from past studies and pave the way for new weighting strategies to address group fairness concerns.
Submission Track: Full Paper Track
Application Domain: None of the above / Not applicable
Clarify Domain: XAI for auditing and editing models for fairness
Survey Question 1: Our work takes a closer look at approaches for algorithmic fairness that use influence functions to understand how individual training instances affect the fairness properties of a trained model. We also study how the training data can be appropriately reweighted to produce a fair and accurate model.
Survey Question 2: Explainability is a nice by-product for us. The primary goal is to edit models so that they become fairer while retaining accuracy. Our approach of reweighting data based on influence scores achieves this fairness/accuracy trade-off (and we provide some insights into why in the paper).
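To make the idea concrete, here is a minimal illustrative sketch (not the authors' exact method) of influence-based reweighting for group fairness. It assumes a plain logistic-regression model, a demographic-parity-style gap between groups as the fairness metric, and synthetic data; all function names and constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, w=None, l2=1e-2, iters=500, lr=0.5):
    """Weighted logistic regression via gradient descent (per-sample weights w)."""
    if w is None:
        w = np.ones(len(y))
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ theta)
        grad = X.T @ (w * (p - y)) / w.sum() + l2 * theta
        theta -= lr * grad
    return theta

# Synthetic data; one column of X is the (binary) group attribute a.
n, d = 2000, 5
X = rng.normal(size=(n, d))
a = (rng.random(n) < 0.5).astype(float)
X[:, -1] = a
y = (sigmoid(X[:, 0] + 0.8 * a + 0.3 * rng.normal(size=n)) > 0.5).astype(float)
X = np.hstack([X, np.ones((n, 1))])  # bias term
d = X.shape[1]

theta = fit_logreg(X, y)

def fairness_gap(theta):
    """Gap in average predicted score between the two groups."""
    p = sigmoid(X @ theta)
    return p[a == 1].mean() - p[a == 0].mean()

# Gradient of the fairness gap w.r.t. theta (scores are sigmoid(x^T theta)).
p = sigmoid(X @ theta)
s = p * (1 - p)
grad_fair = (s[a == 1, None] * X[a == 1]).mean(0) - (s[a == 0, None] * X[a == 0]).mean(0)

# Hessian of the L2-regularized mean training loss at theta.
H = (X * s[:, None]).T @ X / n + 1e-2 * np.eye(d)
H_inv_grad = np.linalg.solve(H, grad_fair)

# Influence of upweighting each training point on the fairness gap:
#   infl_i ≈ -grad_F^T H^{-1} grad_loss_i  (standard influence-function approximation).
per_point_grads = X * (p - y)[:, None]  # per-point gradient of the log-loss
infl = -per_point_grads @ H_inv_grad

# Downweight points whose upweighting would widen the gap, then retrain.
harmful = infl * np.sign(fairness_gap(theta)) > 0
weights = np.where(harmful, 0.1, 1.0)
theta_fair = fit_logreg(X, y, w=weights)

print("gap before:", fairness_gap(theta), "gap after:", fairness_gap(theta_fair))
```

The downweighting rule here (a fixed factor of 0.1 for points estimated to widen the gap) is just one of many possible weighting schemes; the paper's contribution is a unified view of such schemes rather than any single rule.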
Survey Question 3: Influence functions
Submission Number: 11