Keywords: Credit Attribution, Algorithmic Stability, Stable Sample Compression
TL;DR: We study credit attribution by machine learning algorithms via new relaxations of Differential Privacy that specifically weaken the stability guarantees for a designated subset of $k$ datapoints.
Abstract: Credit attribution is crucial across various fields. In academic research, proper citation acknowledges prior work and establishes original contributions. Similarly, in generative models, such as those trained on existing artworks or music, it is important to ensure that any generated content influenced by these works appropriately credits the original creators.
We study credit attribution by machine learning algorithms. We propose new definitions, relaxations of Differential Privacy, that weaken the stability guarantees for a designated subset of $k$ datapoints. These $k$ datapoints can be used non-stably with permission from their owners, potentially in exchange for compensation. Meanwhile, the remaining datapoints are guaranteed to have no significant influence on the algorithm's output.
Our framework extends well-studied notions of stability, including Differential Privacy ($k = 0$), differentially private learning with public data (where the $k$ public datapoints are fixed in advance), and stable sample compression (where the $k$ datapoints are selected adaptively by the algorithm).
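To make this concrete, here is a hedged sketch of one natural way to formalize such a relaxation in the replace-one neighboring model (the notation $I$ for the set of credited indices is ours for illustration; the paper's precise definitions may differ). For every sample $S = (z_1, \dots, z_n)$, one can ask that there exist at most $k$ credited indices outside of which the usual $(\varepsilon, \delta)$-indistinguishability guarantee holds:

$$\exists\, I \subseteq [n],\ |I| \le k \ \text{ such that } \ \forall\, i \notin I,\ \forall\, z',\ \forall\, \text{events } E: \quad \Pr\bigl[A(S) \in E\bigr] \;\le\; e^{\varepsilon}\, \Pr\bigl[A(S^{i \leftarrow z'}) \in E\bigr] + \delta,$$

where $S^{i \leftarrow z'}$ denotes $S$ with its $i$-th datapoint replaced by $z'$. Taking $k = 0$ recovers Differential Privacy, fixing $I$ in advance corresponds to private learning with public data, and allowing $I$ to depend on $S$ mirrors stable sample compression.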
We examine the expressive power of these stability notions within the PAC learning framework, provide a comprehensive characterization of learnability for algorithms adhering to these principles, and propose directions and questions for future research.
Primary Area: Learning theory
Submission Number: 11781