Common Ground Provides a Mental Shortcut in Agent-Agent Interaction

Published: 2024 · Last Modified: 08 Jan 2026 · HHAI 2024 · CC BY-SA 4.0
Abstract: With the growing integration of chatbots, automated writing tools, game AI and similar applications into human society, there is a clear demand for artificially intelligent systems that can successfully collaborate with human partners. This requires overcoming not only physical and communicative barriers, but also those of fundamental understanding: machines do not see and understand the world in the same way humans do. We introduce the concept of ‘Common Ground’ (CG) as a possible solution. Using a model inspired by the collaborative card game ‘The Game’, we study agents that are instantiated with different strategies, i.e., each agent ‘sees’ the model world in a different way. The agents work towards a joint goal that is easy to understand but complex to attain, requiring them to constantly anticipate their partner, which is classically seen as a task requiring active perspective modelling using a form of Theory of Mind. We show that agents achieving Common Ground increase their joint performance, while the need to actively model each other decreases. We discuss the implications of this finding for interaction between computational agents and humans, and suggest future extensions of our model to study the benefits of CG in hybrid human-agent settings.
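To make the setup concrete, below is a minimal, hypothetical sketch of a two-agent simulation loosely based on the rules of ‘The Game’ (cards 2–99, two ascending piles starting at 1, two descending piles starting at 100, with the usual "exactly 10 backwards" exception). The paper's actual model, agent strategies, and Common Ground mechanism are not specified in this abstract; the two strategy classes here are invented stand-ins meant only to illustrate how agents that ‘see’ the same state differently could be instantiated and evaluated on a joint goal (total cards placed).

```python
# Illustrative sketch only: NOT the authors' model. Strategy classes and the
# scoring of "joint performance" below are assumptions for demonstration.
import random

ASC, DESC = "asc", "desc"

def legal(card, kind, top):
    """Ascending pile: card must be higher than the top (or exactly 10 lower).
    Descending pile: card must be lower than the top (or exactly 10 higher)."""
    if kind == ASC:
        return card > top or card == top - 10
    return card < top or card == top + 10

class MinimalMoveAgent:
    """Hypothetical strategy: prefers the move that changes a pile the least."""
    def choose(self, hand, piles):
        moves = [(abs(card - top), card, i)
                 for card in hand
                 for i, (kind, top) in enumerate(piles)
                 if legal(card, kind, top)]
        return min(moves)[1:] if moves else None

class GreedyLowAgent:
    """Hypothetical strategy: plays its lowest playable card on the first fit."""
    def choose(self, hand, piles):
        for card in sorted(hand):
            for i, (kind, top) in enumerate(piles):
                if legal(card, kind, top):
                    return card, i
        return None

def play(agents, hand_size=6, seed=0):
    rng = random.Random(seed)
    deck = list(range(2, 100))
    rng.shuffle(deck)
    hands = [deck[i * hand_size:(i + 1) * hand_size] for i in range(2)]
    deck = deck[2 * hand_size:]
    piles = [(ASC, 1), (ASC, 1), (DESC, 100), (DESC, 100)]
    played, turn = 0, 0
    while True:
        agent, hand = agents[turn], hands[turn]
        move = agent.choose(hand, piles) if hand else None
        if move is None:
            break                      # game ends when the current agent cannot move
        card, i = move
        hand.remove(card)
        piles[i] = (piles[i][0], card)
        played += 1
        if deck:
            hand.append(deck.pop())
        turn = 1 - turn
    return played                      # proxy for joint performance: cards placed

if __name__ == "__main__":
    print(play([MinimalMoveAgent(), GreedyLowAgent()]))
```

In such a setup, the interesting comparisons are between agent pairs that do or do not share (or converge on) a strategy, and how much explicit partner modelling each pairing needs to reach the same joint score; this sketch only fixes the environment side of that comparison.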