Hidden incentives for self-induced distributional shift

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission
TL;DR: Performance metrics are incomplete specifications; the ends don't always justify the means.
Abstract: Decisions made by machine learning systems have increasing influence on the world. Yet it is common for machine learning algorithms to assume that no such influence exists. An example is the use of the i.i.d. assumption in online learning for applications such as content recommendation, where the (choice of) content displayed can change users' perceptions and preferences, or even drive them away, causing a shift in the distribution of users. Generally speaking, it is possible for an algorithm to change the distribution of its own inputs. We introduce the term self-induced distributional shift (SIDS) to describe this phenomenon. A large body of work in reinforcement learning and causal machine learning aims to deal with distributional shift caused by deploying learning systems previously trained offline. Our goal is similar, but distinct: we point out that changes to the learning algorithm, such as the introduction of meta-learning, can reveal hidden incentives for distributional shift (HIDS), and aim to diagnose and prevent problems associated with hidden incentives. We design a simple environment as a "unit test" for HIDS, as well as a content recommendation environment which allows us to disentangle different types of SIDS. We demonstrate the potential for HIDS to cause unexpected or undesirable behavior in these environments, and propose and test a mitigation strategy.
Code: https://anonymous.4open.science/r/66c5e3a4-2a45-4d71-ae58-d097e12ebae1/
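As a rough illustration of SIDS (a toy sketch, not the authors' environments or the code linked above), the snippet below simulates a greedy online recommender whose choices drive away users who dislike the shown content. Its measured click-through rate rises because the user distribution shifts, not because its predictions improve. All names, parameters, and dynamics here are illustrative assumptions.

```python
# Toy sketch of self-induced distributional shift (SIDS): a recommender whose
# choices change which users remain in the population, shifting its own inputs.
# Hypothetical example; not the paper's environment.
import numpy as np

rng = np.random.default_rng(0)

N_USERS, N_ITEMS, ROUNDS = 1000, 2, 200
prefs = rng.integers(0, N_ITEMS, size=N_USERS)  # each user prefers one content type
active = np.ones(N_USERS, dtype=bool)

clicks = np.ones(N_ITEMS)  # per-item click counts for a simple greedy policy
shows = np.ones(N_ITEMS)

for t in range(ROUNDS):
    item = np.argmax(clicks / shows)      # recommend the historically best item
    users = np.flatnonzero(active)
    liked = prefs[users] == item
    clicks[item] += liked.sum()
    shows[item] += len(users)
    # Users repeatedly shown content they dislike sometimes leave the platform,
    # so the distribution of remaining users drifts toward the shown content type.
    leavers = users[~liked & (rng.random(len(users)) < 0.05)]
    active[leavers] = False
    if t % 50 == 0:
        ctr = liked.mean() if len(users) else 0.0
        print(f"round {t:3d}  active users {active.sum():4d}  CTR {ctr:.2f}")
```

In this sketch the measured CTR improves over time even though no user's preferences change, because the algorithm's own actions reshape the population it is evaluated on.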
Keywords: distributional shift, safety, incentives, specification, content recommendation, reinforcement learning, online learning, ethics
