Performative Reinforcement Learning in Gradually Shifting Environments

Published: 26 Apr 2024, Last Modified: 15 Jul 2024
Venue: UAI 2024 poster
License: CC BY 4.0
Keywords: reinforcement learning, performative prediction, convex optimization
TL;DR: We study a reinforcement learning setting where the environment gradually reacts to a policy and introduce and compare different algorithms in this setting.
Abstract: When Reinforcement Learning (RL) agents are deployed in practice, they might impact their environment and change its dynamics. We propose a new framework to model this phenomenon, where the current environment depends on the deployed policy as well as on its previous dynamics. This is a generalization of Performative RL (PRL) [Mandal et al., 2023]. Unlike PRL, our framework allows us to model scenarios where the environment gradually adjusts to a deployed policy. We adapt two algorithms from the performative prediction literature to our setting and propose a novel algorithm called Mixed Delayed Repeated Retraining (MDRR). We provide conditions under which these algorithms converge and compare them using three metrics: number of retrainings, approximation guarantee, and number of samples per deployment. MDRR is the first algorithm in this setting which combines samples from multiple deployments in its training. This makes MDRR particularly suitable for scenarios where the environment's response strongly depends on its previous dynamics, which are common in practice. We experimentally compare the algorithms using a simulation-based testbed, and our results show that MDRR converges significantly faster than previous approaches.
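To make the "gradually shifting" idea in the abstract concrete, below is a minimal, hypothetical sketch of the kind of environment-update model described: the next dynamics are a mix of the previous dynamics and the environment's response to the deployed policy. All names (`environment_response`, `gradual_shift`) and the specific convex-combination form with mixing rate `lam` are illustrative assumptions for intuition, not the paper's actual definitions.

```python
def environment_response(policy_param: float) -> float:
    # Hypothetical stationary response map: the dynamics parameter the
    # environment would settle into if `policy_param` were deployed forever.
    return 0.5 * policy_param + 0.1

def gradual_shift(prev_dynamics: float, policy_param: float, lam: float = 0.3) -> float:
    # Assumed gradual-adjustment model: new dynamics are a convex
    # combination of the previous dynamics and the stationary response.
    # With lam = 1 the environment reacts instantly to the policy,
    # recovering a PRL-style immediate shift.
    return (1.0 - lam) * prev_dynamics + lam * environment_response(policy_param)

# Repeatedly deploying a fixed policy drives the dynamics toward the
# stationary response, but only gradually, one deployment at a time.
dynamics = 0.0
policy_param = 1.0
for _ in range(50):
    dynamics = gradual_shift(dynamics, policy_param)
print(dynamics)  # converges toward environment_response(1.0) = 0.6
```

Under this sketch, small `lam` corresponds to an environment whose response depends strongly on its previous dynamics, which is exactly the regime where the abstract argues MDRR's reuse of samples from multiple deployments pays off.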
Supplementary Material: zip
List Of Authors: Rank, Ben and Triantafyllou, Stelios and Mandal, Debmalya and Radanovic, Goran
Latex Source Code: zip
Signed License Agreement: pdf
Code Url: https://github.com/rank-and-files/performative-rl-gradually-shifting-envs
Submission Number: 371