The Fallacy of Precision: Deconstructing the Narrative Supporting AI-Enhanced Military Weaponry

Published: 14 Oct 2024 | Last Modified: 23 Nov 2024 | HRAIM Poster | CC BY 4.0
Keywords: Palestine, Gaza, IDF, Warfare, Philosophy, Morality, Dehumanization, AI Military, IHL, International Humanitarian Law
TL;DR: The claim that militaristic AI is more precise and therefore more humane is a false narrative: in practice, militaristic AI leads to high civilian death counts.
Abstract: Recent pro-military arguments have attempted to morally justify the integration of AI into military systems, claiming that it leads to more precise and sophisticated weaponry. However, this narrative obscures the reality of AI-based weaponry and is deeply flawed for several reasons. First, the use of AI in military contexts is morally reprehensible: it perpetuates violence and dehumanization through biased training data and unethical experimentation on human lives, undermining claims of ethical justification. The development of AI-powered autonomous weapon systems often relies on datasets that reflect existing societal biases, potentially leading to discriminatory targeting and disproportionate impacts on marginalized communities. Furthermore, the refinement of these systems necessitates a troubling "trial and error" approach that uses real-world conflicts as testing grounds, effectively treating human lives as expendable data points for AI optimization. Second, contrary to the assertion that AI enhances precision and human control, military AI often reduces oversight, human control, and accountability. In practice, AI military systems, such as autonomous weapon systems (AWS), fail to adequately distinguish between combatants and civilians. Finally, the deployment of AI in warfare contradicts current international humanitarian law (IHL), rendering its use legally indefensible. This paper critically investigates the misleading philosophies driving the push for militaristic AI, arguing that militaristic AI (1) is morally indefensible because it necessitates extensive experimentation on human lives to develop sophisticated AI weaponry, disproportionately affecting marginalized communities in the Global South, (2) is associated with reduced human control and precision, as evidenced by the high civilian toll of currently deployed AI military systems, and (3) constitutes a violation of IHL. The paper presents case studies of AI systems such as "Where's Daddy?", "Lavender", and "The Gospel," employed by the Israel Defense Forces (IDF) in Palestine, demonstrating how AI-driven "kill lists" disregard civilian casualties and facilitate the automation of violence. By unmasking the deceptive rhetoric surrounding military AI, this paper seeks to elicit critical discourse on the practical ramifications of the use of AI in warfare.
Submission Number: 12