Keywords: Ad Hoc Teamwork, Multi-Agent Reinforcement Learning, Adaptation, Human-AI Alignment, Environmental Symmetries, Population Diversity, Hanabi, Social Norms
TL;DR: Diversifying the conventions of training teammates enables ad hoc teamwork agents to coordinate robustly with unseen partners, achieving state-of-the-art performance in the card game Hanabi.
Abstract: In dynamic collaborative settings, artificial intelligence (AI) agents must adapt to novel teammates who utilise unforeseen strategies in order to align well with humans. While such adaptation is often straightforward for humans, it can be challenging for AI agents. Our work introduces symmetry-breaking augmentations (SBA) as a novel approach to this challenge. By applying a symmetry-flipping operation to increase behavioural diversity among training teammates, SBA encourages agents to learn robust responses to unknown strategies, highlighting how social conventions affect human-AI alignment. We demonstrate this experimentally in two settings, showing that our approach outperforms previous ad hoc teamwork results in the challenging card game Hanabi. In addition, we propose a general metric for estimating symmetry dependency amongst a given set of policies. Our findings provide insight into how AI systems can better adapt to diverse human conventions, and into the core mechanics of alignment.
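To illustrate the idea of a symmetry-flipping augmentation, the sketch below wraps a training teammate's policy so that, each episode, a random permutation of symmetric labels (e.g. Hanabi card colours) is applied to its observations and inverted on its actions, making the teammate appear to follow a different but symmetry-equivalent convention. This is a minimal illustration, not the authors' implementation: the observation/action encoding, the `policy` callable, and the colour-hint action format are assumptions made for the example.

```python
import random

COLORS = ["red", "yellow", "green", "white", "blue"]  # Hanabi colour labels


class SymmetryBrokenPartner:
    """Illustrative wrapper: relabels symmetric colours around a partner policy."""

    def __init__(self, policy):
        self.policy = policy  # hypothetical callable: observation dict -> action dict
        self.reset()

    def reset(self):
        # Sample a fresh colour permutation at the start of each episode.
        shuffled = COLORS[:]
        random.shuffle(shuffled)
        self.perm = dict(zip(COLORS, shuffled))                # env labels -> partner's view
        self.inv_perm = {v: k for k, v in self.perm.items()}   # partner's view -> env labels

    def act(self, observation):
        # Permute colours in the observation before the partner sees it
        # (assumed illustrative encoding: hints as (colour, rank) pairs,
        # fireworks as a colour -> height mapping).
        obs_view = {
            "hints": [(self.perm[c], r) for c, r in observation["hints"]],
            "fireworks": {self.perm[c]: n for c, n in observation["fireworks"].items()},
        }
        action = self.policy(obs_view)
        # Map any colour-referencing action back into the environment's labels.
        if action.get("type") == "hint_color":
            action = {**action, "color": self.inv_perm[action["color"]]}
        return action
```

In this sketch, the learning agent trains against `SymmetryBrokenPartner(policy)` instances rather than the raw policy, so the arbitrary colour conventions it encounters vary across episodes even when the underlying teammate is fixed.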
Submission Type: Long Paper (9 Pages)
Archival Option: This is a non-archival submission
Presentation Venue Preference: ICLR 2025
Submission Number: 24