Assessing Risks of Using Autonomous Language Models in Military and Diplomatic Planning

Published: 31 Oct 2023, Last Modified: 01 Dec 2023, MASEC@NeurIPS'23 WiPP
Keywords: military, multi-agent, ai, artificial intelligence, large language models, foundation models, decision-making, high-stakes
TL;DR: We investigate the behavior of autonomous agents in simulated military and foreign-policy scenarios to determine their potential for conflict escalation and the risks of deploying multiple such agents in these high-stakes contexts.
Abstract: The potential integration of autonomous agents into high-stakes military and foreign-policy decision-making has gained prominence, especially with the emergence of advanced generative AI models such as GPT-4. This paper scrutinizes the behavior of multiple autonomous agents in simulated military and diplomacy scenarios, focusing specifically on their potential to escalate conflicts. Drawing on established international relations frameworks, we assess the escalation potential of the decisions these agents make across different scenarios. In contrast to prior qualitative studies, our research provides both qualitative and quantitative insights. We find significant differences in the models' predilection to escalate, with Claude 2 being the least aggressive and GPT-4-Base the most aggressive model. Our findings indicate that, even in seemingly neutral contexts, language-model-based autonomous agents occasionally opt for aggressive or provocative actions, and this tendency intensifies in scenarios with predefined trigger events. Importantly, the patterns behind such escalatory behavior remain largely unpredictable. Furthermore, a qualitative analysis of the models' verbalized reasoning, particularly for GPT-4-Base, reveals concerning justifications. Given the high stakes of military and foreign-policy contexts, the deployment of such autonomous agents demands further examination and cautious consideration.
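The abstract describes scoring the escalation potential of agent decisions against frameworks from the international relations literature. As an illustration only, the sketch below shows one plausible way such scoring could be set up; the action categories, their weights, and the stubbed agent policy are assumptions made for this example and are not taken from the paper.

```python
# Hypothetical sketch (not the authors' code): mapping the actions an agent
# chooses in a simulated crisis turn onto a numeric escalation score, in the
# spirit of scoring frameworks from the international relations literature.

from typing import List

# Assumed escalation weights per action category (higher = more escalatory).
ESCALATION_WEIGHTS = {
    "de-escalate": -2,   # e.g. negotiate, stand down forces
    "posture": 1,        # e.g. military exercises, shows of force
    "provoke": 3,        # e.g. blockades, cyber operations
    "use_of_force": 8,   # e.g. targeted strikes
    "nuclear": 20,       # e.g. nuclear launch
}


def stub_agent_policy(scenario: str, turn: int) -> List[str]:
    """Placeholder for a language-model agent; a real study would prompt a
    model (e.g. GPT-4 or Claude 2) with the scenario state and parse its
    chosen actions from the response."""
    return ["posture", "provoke"] if turn > 2 else ["de-escalate"]


def escalation_score(actions: List[str]) -> int:
    """Sum the weights of the actions an agent selected this turn."""
    return sum(ESCALATION_WEIGHTS.get(a, 0) for a in actions)


if __name__ == "__main__":
    scenario = "neutral start, no predefined trigger event"
    for turn in range(1, 6):
        actions = stub_agent_policy(scenario, turn)
        print(f"turn {turn}: actions={actions}, score={escalation_score(actions)}")
```

In an actual experiment, the stubbed policy would be replaced by prompted model calls, and per-turn scores would be aggregated by model and scenario to compare escalation tendencies.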
Submission Number: 20