AI Alignment with Changing and Influenceable Reward Functions

Published: 17 Jun 2024 · Last Modified: 02 Jul 2024 · ICML 2024 Workshop MHFAIA Oral · License: CC BY 4.0
Keywords: preference changes; influence
Abstract: Existing AI alignment approaches assume that preferences are static, which is unrealistic: our preferences change, and may even be influenced by our interactions with AI systems themselves. To clarify the consequences of incorrectly assuming static preferences, we introduce Dynamic Reward Markov Decision Processes (DR-MDPs), which explicitly model preference changes and AI influence. We show that despite its convenience, the static-preference assumption may undermine the soundness of existing alignment techniques, leading them to implicitly reward AI systems for influencing user preferences in ways users may not truly want. We then explore potential solutions. First, we offer a unifying perspective on how an agent's optimization horizon may partially help reduce undesirable AI influence. Then, we formalize different notions of AI alignment that account for preference change from the get-go. Comparing the strengths and limitations of eight such notions of alignment, we find that they all either err towards causing undesirable AI influence or are overly risk-averse, suggesting that there may not exist a straightforward solution to problems of changing preferences. As grappling with changing preferences is unavoidable in real-world settings, it is all the more important to handle these issues with care, balancing risks and capabilities. We hope our work can provide conceptual clarity and constitute a first step towards AI alignment practices which explicitly account for (and contend with) the changing and influenceable nature of human preferences.
Submission Number: 15
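The abstract does not spell out the formal definition of a DR-MDP, but the following minimal Python sketch illustrates the kind of object it describes: a reward function parameterized by preferences (theta) that have their own, possibly action-dependent, dynamics. All names and the exact structure here are assumptions for exposition, not the paper's definition. The point of the sketch is that naively maximizing discounted return under the evolving theta can implicitly reward the agent for steering preferences toward states that are easy to satisfy.

```python
# Illustrative sketch only: one plausible encoding of a Dynamic Reward MDP
# (DR-MDP), where the reward parameters themselves evolve and may be
# influenced by the agent's actions. Hypothetical names, not the paper's API.
from dataclasses import dataclass
from typing import Callable, TypeVar

State = TypeVar("State")
Action = TypeVar("Action")
Theta = TypeVar("Theta")  # preference/reward parameter, e.g. an encoding of user preferences


@dataclass
class DRMDP:
    """A hypothetical DR-MDP container.

    Unlike a standard MDP with a fixed reward R(s, a), the reward here depends
    on a preference parameter theta that has its own dynamics, which the
    agent's actions may influence.
    """
    transition: Callable[[State, Action], State]                   # environment dynamics
    preference_dynamics: Callable[[Theta, State, Action], Theta]   # how preferences drift or are influenced
    reward: Callable[[State, Action, Theta], float]                # reward under the *current* preferences
    discount: float


def rollout_return(m: DRMDP, policy, s, theta, horizon: int) -> float:
    """Discounted return with reward evaluated under the evolving theta.

    Optimizing this objective naively can reward the agent for influencing
    theta (the user's preferences), which is the failure mode the abstract
    attributes to the static-preference assumption.
    """
    total, discount = 0.0, 1.0
    for _ in range(horizon):
        a = policy(s, theta)
        total += discount * m.reward(s, a, theta)
        theta = m.preference_dynamics(theta, s, a)  # preferences change, possibly because of a
        s = m.transition(s, a)
        discount *= m.discount
    return total
```

Under this reading, the abstract's discussion of optimization horizon corresponds to varying `horizon` (or `discount`), and the eight alignment notions correspond to different choices of which theta the return is evaluated against.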