Long-Term Impacts of Model Retraining with Strategic Feedback

15 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: societal considerations including fairness, safety, privacy
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Strategic Classification
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: This paper studies the dynamics of welfare and fairness that arise when strategic agents interact with an ML system retrained over time on model-annotated and human-annotated samples.
Abstract: When machine learning (ML) models need frequent retraining, obtaining *human-annotated* samples is often prohibitively expensive, so modern ML systems increasingly rely on *model-annotated* samples, i.e., data labeled by the models themselves. This paper studies a setting where an ML model is retrained over time (with both *human-annotated* and *model-annotated* samples) to make decisions about a sequence of *strategic* human agents who can adapt their behavior in response to the most recent model. We investigate what happens when *model-annotated* data are generated under the agents' strategic feedback and how models retrained on such data are affected. Specifically, we first formalize the interactions between the agents and the ML system, and then analyze how both evolve under these dynamic interactions. We find that as the model is retrained, agents become increasingly likely to receive positive decisions, whereas the proportion of agents with positive true labels may decrease over time. We therefore propose an approach to stabilize the dynamics and show how it can further be leveraged to enhance algorithmic fairness when agents come from multiple social groups. Experiments on synthetic, semi-synthetic, and real data validate the theoretical findings.
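To make the retraining loop in the abstract concrete, below is a minimal simulation sketch. It is not the paper's actual model: it assumes a logistic-regression classifier, agents who best-respond by moving their features along the model's weight vector when the decision boundary is within a manipulation budget, and true labels that are unaffected by manipulation. All names and parameters (`best_respond`, `budget`, batch sizes) are hypothetical choices for illustration.

```python
# Minimal simulation sketch of the retraining loop described in the abstract.
# Everything here is a hypothetical stand-in, not the paper's model: a linear
# logistic-regression classifier, agents who best-respond by moving along the
# weight vector when within a manipulation budget, and fixed true labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 2
w_true = np.array([1.0, 1.0])  # hypothetical ground-truth score direction


def sample_agents(n):
    """Draw agent features and true labels from a fixed population."""
    X = rng.normal(size=(n, d))
    p = 1.0 / (1.0 + np.exp(-(X @ w_true)))
    return X, rng.binomial(1, p)


def best_respond(X, clf, budget=2.0):
    """Stylized strategic feedback: rejected agents within `budget` of the
    decision boundary move just across it; true labels do not change."""
    w, b = clf.coef_.ravel(), clf.intercept_[0]
    scores = X @ w + b
    dist = -scores / np.linalg.norm(w)          # distance to the boundary
    move = (scores < 0) & (dist <= budget)      # rejected and close enough
    X_new = X.copy()
    X_new[move] += np.outer(dist[move] + 1e-3, w / np.linalg.norm(w))
    return X_new


# Round 0: train on human-annotated data only.
X_pool, y_pool = sample_agents(500)
clf = LogisticRegression().fit(X_pool, y_pool)

for t in range(10):
    X, y_true = sample_agents(500)
    X_adapted = best_respond(X, clf)            # agents react to latest model
    y_model = clf.predict(X_adapted)            # model-annotated labels
    X_human, y_human = sample_agents(50)        # small human-annotated batch
    X_pool = np.vstack([X_pool, X_adapted, X_human])
    y_pool = np.concatenate([y_pool, y_model, y_human])
    clf = LogisticRegression().fit(X_pool, y_pool)
    print(f"round {t}: acceptance rate = {clf.predict(X_adapted).mean():.2f}, "
          f"true qualification rate = {y_true.mean():.2f}")
```

In this toy setup, manipulated agents who cross the boundary are fed back into training as positive labels, so the acceptance rate tends to drift upward across rounds while the true qualification rate stays flat, mirroring the gap between decisions and labels that the abstract highlights.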
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 463