Modeling Bounded Rationality in Multi-Agent Simulations Using Rationally Inattentive Reinforcement Learning

Published: 17 Dec 2022, Last Modified: 28 Feb 2023
Accepted by TMLR
Abstract: Multi-agent reinforcement learning (MARL) is a powerful framework for studying emergent behavior in complex agent-based simulations. However, RL agents are often assumed to be rational and to behave optimally, which does not fully reflect human behavior. In this work, we propose a new, more human-like RL agent that incorporates an established model of human irrationality, the Rational Inattention (RI) model. RI models the cost of cognitive information processing using mutual information. Our Rationally Inattentive RL (RIRL) framework generalizes prior work and is more flexible, allowing for multi-timestep dynamics and information channels with heterogeneous processing costs. We demonstrate the flexibility of RIRL in versions of a classic economic setting (the Principal-Agent problem) with varying complexity. In simple settings, we show that RIRL yields optimal agent policies whose functional form approximately matches the theoretical predictions of prior work. We additionally demonstrate that using RIRL to analyze complex, theoretically intractable settings yields a rich spectrum of new equilibrium behaviors that differ from those found under rationality assumptions. For example, increasing the cognitive cost experienced by a manager agent causes the other agents to increase the magnitude of their actions to compensate. These results suggest RIRL is a powerful tool for building AI agents that can mimic real human behavior.
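The abstract's core modeling idea, penalizing a policy by the mutual information between states and actions, can be sketched in a few lines. This is a hypothetical illustration of the general RI cost, not the paper's implementation; the names (`p_s`, `pi`, `lam`) and the tabular setting are assumptions for clarity.

```python
import numpy as np

def mutual_information(p_s, pi):
    """I(S; A) in nats, for a state marginal p_s[s] and a stochastic
    policy pi[s, a]; this is the RI-style information-processing cost."""
    p_sa = p_s[:, None] * pi              # joint distribution p(s, a)
    p_a = p_sa.sum(axis=0)                # action marginal p(a)
    indep = p_s[:, None] * p_a[None, :]   # product of marginals
    mask = p_sa > 0                       # avoid log(0) on zero-mass cells
    return float(np.sum(p_sa[mask] * np.log(p_sa[mask] / indep[mask])))

def ri_objective(p_s, pi, reward, lam):
    """Expected reward minus a cognitive cost lam * I(S; A)."""
    expected_reward = float(np.sum(p_s[:, None] * pi * reward))
    return expected_reward - lam * mutual_information(p_s, pi)

# A state-independent policy processes no information about the state,
# so it pays zero cost; a fully state-dependent policy pays log(2) nats.
p_s = np.array([0.5, 0.5])
inattentive = np.full((2, 2), 0.5)   # ignores the state
attentive = np.eye(2)                # reads the state exactly
```

As the cognitive-cost coefficient `lam` grows, the objective increasingly favors policies that condition less on the state, which is the mechanism behind the bounded-rationality behaviors the paper studies.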
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Marc_Lanctot1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 444