Keywords: mixed autonomy traffic, collective rationality, reinforcement learning, game theory, formation of cooperation, autonomous vehicle behavior design
Abstract: Collective action of agents is essential for steering AI's impacts toward societal benefit. Cooperation, exhibited in various forms, is a ubiquitous phenomenon in socio-technical systems involving interactions among multiple agents. Mixed autonomous driving systems are characterized by complex physical-strategic interactions among agents, and a natural and meaningful question is whether cooperation can spontaneously emerge in such systems. This paper attempts to answer this question with experimental evidence, viewed through the lens of collective rationality (CR) -- a game-theoretic concept describing emergent cooperation among self-interested agents. We investigate when and how CR arises in mixed autonomous driving systems in which Autonomous Vehicles (AVs) are trained in a distributed manner via deep reinforcement learning (DRL) in simulation environments. We show that, intriguingly, a simple reward design allows self-interested agents to consistently achieve CR across diverse scenarios, without explicitly including system-level incentives or transfer payments among agents. This finding serves as initial evidence of the emergence and scaling of cooperative behaviors among heterogeneous driving agents in mixed autonomy environments.
Submission Number: 37