Influence-based Attributions can be Manipulated

NeurIPS 2024 Workshop ATTRIB Submission 10

Published: 30 Oct 2024, Last Modified: 14 Jan 2025. Venue: ATTRIB 2024. License: CC BY 4.0.
Keywords: Influence Functions, Explanations, data valuation, Attribution, adversarial, attack
Abstract: Influence Functions are a standard tool for attributing predictions to training data in a principled manner and are widely used in applications such as data valuation and fairness. In this work, we present realistic incentives to manipulate influence-based attributions and investigate whether these attributions can be \textit{systematically} tampered by an adversary. We show that this is indeed possible for logistic regression models trained on ResNet feature embeddings and standard tabular fairness datasets and provide efficient attacks with backward-friendly implementations. Our work raises questions on the reliability of influence-based attributions in adversarial circumstances. Code will be available at : \url{https://github.com/infinite-pursuits/influence-based-attributions-can-be-manipulated}.
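For context, the influence-based attributions discussed in the abstract are typically the classic influence-function scores, which estimate how a test prediction's loss would change if a training point were upweighted. The sketch below is a minimal NumPy illustration of that score for a regularized logistic regression model; all function names and the regularization setup are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: influence-function score for logistic regression.
# I(z_i, z_test) = -grad L(z_test)^T H^{-1} grad L(z_i), where H is the
# Hessian of the (regularized) training loss at the fitted parameters.
# All names here are illustrative, not the paper's code.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss(w, x, y, lam):
    # Gradient of the L2-regularized logistic loss for one example (y in {0, 1}).
    p = sigmoid(x @ w)
    return (p - y) * x + lam * w

def hessian(w, X, lam):
    # Hessian of the mean regularized logistic loss over the training set.
    p = sigmoid(X @ w)
    d = p * (1.0 - p)
    return (X.T * d) @ X / X.shape[0] + lam * np.eye(X.shape[1])

def influence_scores(w, X_train, y_train, x_test, y_test, lam=1e-2):
    # One score per training point: the approximate change in test loss
    # from infinitesimally upweighting that point.
    H = hessian(w, X_train, lam)
    v = np.linalg.solve(H, grad_loss(w, x_test, y_test, lam))  # H^{-1} g_test
    grads = np.array([grad_loss(w, x, y, lam)
                      for x, y in zip(X_train, y_train)])
    return -grads @ v
```

An adversary in the paper's threat model would aim to perturb the model or data so that these scores rank training points differently, without noticeably degrading predictions.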
Submission Number: 10