Learning to Switch Among Agents in a Team via 2-Layer Markov Decision Processes

Published: 28 Jul 2022, Last Modified: 30 Jun 2023. Accepted by TMLR.
Abstract: Reinforcement learning agents have been mostly developed and evaluated under the assumption that they will operate in a fully autonomous manner---they will take all actions. In this work, our goal is to develop algorithms that, by learning to switch control between agents, allow existing reinforcement learning agents to operate under different automation levels. To this end, we first formally define the problem of learning to switch control among agents in a team via a 2-layer Markov decision process. Then, we develop an online learning algorithm that uses upper confidence bounds on the agents' policies and the environment's transition probabilities to find a sequence of switching policies. The total regret of our algorithm with respect to the optimal switching policy is sublinear in the number of learning steps and, whenever multiple teams of agents operate in a similar environment, our algorithm greatly benefits from maintaining shared confidence bounds for the environments' transition probabilities and it enjoys a better regret bound than problem-agnostic algorithms. Simulation experiments in an obstacle avoidance task illustrate our theoretical findings and demonstrate that, by exploiting the specific structure of the problem, our proposed algorithm is superior to problem-agnostic algorithms.
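To make the switching idea in the abstract concrete, below is a minimal, hypothetical sketch of a switching layer that uses upper confidence bounds to decide which agent in a team takes control at each step. This is not the paper's 2-layer MDP algorithm (which builds confidence bounds on both agent policies and the environment's transition probabilities); it only illustrates the UCB-style selection principle, with all names and parameters invented for illustration.

```python
import math
import random

# Hypothetical sketch: a switching layer that picks which agent acts,
# using a UCB-style bonus on each agent's empirical performance.
# This is an illustration of the switching principle, not the paper's algorithm.
class UCBSwitcher:
    def __init__(self, n_agents, c=1.0):
        self.counts = [0] * n_agents   # times each agent was given control
        self.means = [0.0] * n_agents  # empirical mean reward per agent
        self.c = c                     # exploration coefficient (assumed)
        self.t = 0                     # total number of switching decisions

    def select(self):
        """Return the index of the agent to put in control this step."""
        self.t += 1
        for i, n in enumerate(self.counts):
            if n == 0:
                return i  # give each agent control at least once
        # Optimism in the face of uncertainty: mean + confidence bonus.
        return max(
            range(len(self.counts)),
            key=lambda i: self.means[i]
            + self.c * math.sqrt(math.log(self.t) / self.counts[i]),
        )

    def update(self, agent, reward):
        """Incorporate the reward observed while `agent` was in control."""
        self.counts[agent] += 1
        self.means[agent] += (reward - self.means[agent]) / self.counts[agent]


if __name__ == "__main__":
    # Toy usage: two agents with different (unknown) success rates.
    random.seed(0)
    success_rate = [0.2, 0.8]  # fictitious agent qualities
    switcher = UCBSwitcher(n_agents=2)
    picks = [0, 0]
    for _ in range(1000):
        a = switcher.select()
        r = 1.0 if random.random() < success_rate[a] else 0.0
        switcher.update(a, r)
        picks[a] += 1
    # The switcher should hand control to the stronger agent most of the time.
    print(picks)
```

In the paper's setting the switching decision is state-dependent and the regret analysis additionally exploits confidence bounds on the environment's transition probabilities; the bandit-style sketch above omits both.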
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
1. Changed the title of the paper to a more specific one.
2. Added a figure separating the switching layer from the action layer of a 2-layer MDP.
3. Added citations to https://arxiv.org/abs/1903.0156 and the options framework.
4. Fixed the cardinality symbol $|.|$.
5. Added additional experiments to the main section and Section G in the appendix.
6. Moved the figure on the performance of machine and human agents to Appendix E.
7. Added more discussion of human reaction time, general reward design principles, and the multiple-teams-of-agents setting.
Code: https://github.com/vdblm/Human-Machine-MDP
Assigned Action Editor: ~David_Ha1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 43