Can an AI-powered military comply with international humanitarian law?

Published: 14 Oct 2024 · Last Modified: 23 Nov 2024 · HRAIM Poster · CC BY 4.0
Keywords: AI safety, autonomous weapons systems (AWS), AI regulations, International Humanitarian Law (IHL)
TL;DR: AI-powered autonomous weapons do not comply with international humanitarian law.
Abstract: Artificial Intelligence (AI) has made unprecedented progress in the last few years, sparking debates about AI safety. Concerns that AI is advancing too fast, without all safety issues being addressed, have led to calls for a slowdown in AI research given its growing impact on everyone's lives. We believe the AI community has largely overlooked autonomous weapons systems (AWS), a concerning use case of AI and an alarming threat to human life. Building on current work, we ask whether AI in the military can be safely regulated in light of international humanitarian law (IHL). Our goal is to highlight the gap between the current state of military AI and international law, and to show how hard it is to legalize AWS. In future work, we will analyze, in more technical detail, how current AI systems fail to comply with international humanitarian law and hence are not ready to be used in wars.
Submission Number: 13
