Abstract: Codenames is a cooperative game of constrained language communication. Success depends on giving the best clues for your teammate, whose identity may be unknown. Previous Codenames AI agents make the implicit assumption that their teammate uses the same language model. We show that team performance can often be improved if an agent instead assumes that its teammate will observe only a noisy version of each clue. We propose and evaluate several methods for adapting the scale of this noise over time, and show that by adapting the noise level we significantly outperform agent teams that do not adapt. These results demonstrate a successful method for adapting language-model-based agents so that they perform better with a variety of teammates.