Conciliator steering: Imposing user preference in multi-objective reinforcement learning

Published: 28 Jun 2024, Last Modified: 28 Jun 2024, Accepted by TMLR
Abstract: Many real-world problems with multiple objectives require reinforcement learning solutions that handle trade-offs in a user-preferred manner. In the multi-objective framework, a single algorithm can be developed that adapts to different user preferences based on a pre-defined reward function and a subjectively defined scalarisation function. The scalarisation function can be approximated by fitting a meta-model with information gained from the interaction between the user and the environment or the agent. This interaction requires a precisely formulated form of constructive feedback that is also simple for the user to give. In this paper, we propose a novel algorithm, Conciliator steering, which leverages priority order and reward transfer to seek optimal user-preferred policies in multi-objective reinforcement learning under the expected scalarised returns criterion. We test Conciliator steering on the DeepSeaTreasure v1 benchmark problem and demonstrate that it can find user-preferred policies with effortless and simple user-agent interaction and negligible bias, which has not been possible before. Additionally, we show that, on average, Conciliator steering results in a fraction of the carbon dioxide emissions and total energy consumption of training a fully connected MNIST classifier, with both run on a personal laptop.
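The abstract describes the approach only at a high level. As a rough illustration of what steering by "priority order and reward transfer" could look like in code, the minimal sketch below reshapes a vector reward according to a user-given priority order before scalarisation. The function name `transfer_reward`, the `transfer_rate` parameter, and the transfer rule are assumptions made for exposition; they are not the paper's Conciliator steering algorithm (see the linked repository for the actual implementation).

```python
import numpy as np

def transfer_reward(vec_reward, priority_order, transfer_rate=0.5):
    """Illustrative sketch (not the paper's algorithm): shift a fraction of
    reward from lower-priority objectives to the next higher-priority one,
    so policies trained on the reshaped reward emphasise what the user
    cares about most.

    vec_reward     : per-objective rewards, e.g. [treasure value, time penalty]
    priority_order : objective indices ordered from most to least important
    transfer_rate  : fraction of each lower-priority reward moved upward
    """
    r = np.asarray(vec_reward, dtype=float).copy()
    for rank in range(1, len(priority_order)):
        donor = priority_order[rank]          # lower-priority objective
        receiver = priority_order[rank - 1]   # next higher-priority objective
        moved = transfer_rate * r[donor]
        r[donor] -= moved
        r[receiver] += moved
    return r

# Toy example with two objectives (treasure value, time penalty),
# where the user prioritises treasure over time:
user_priority = [0, 1]
shaped = transfer_reward([10.0, -2.0], user_priority)
print(shaped)  # reshaped vector reward fed to the scalarisation step
```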
Submission Length: Long submission (more than 12 pages of main content)
Code: https://github.com/helsinki-sda-group/conciliator
Supplementary Material: zip
Assigned Action Editor: ~Furong_Huang1
Submission Number: 2111