Keywords: AI, ML, military, functionality, ethics, harms, philosophy
TL;DR: Based on an analysis of AI/ML systems as epistemic technologies, we propose a framework for allocating responsibility and taxonomizing harms in military uses of AI/ML.
Abstract: As the purview of artificial intelligence (AI) or machine learning (ML) continues to expand, so does the use of AI/ML in military applications. How do we characterise the dimensions of ethical consideration relevant to military uses of ML? We propose that any normative deliberation over the use of ML in warfare must begin with an understanding of the function of the technology and the full range of harms that might flow from its (mis)use.
In the first place, it is necessary to understand that the methods of ML are tools for data analysis. As such, they are "epistemic technologies" (cf. Alvarado, 2023). The methods of ML are therefore to be understood as techniques wielded by human beings towards the end of gaining information about the world. ML-based systems are not autonomous reasoners, decision-makers, or actors, and any discussion of the ethics of their use in warfare must not treat them as such, on pain of obfuscating or wrongfully absolving human responsibility.
Of equal importance is the development of a taxonomy of potential harms which might flow from the military use of ML. We propose that, at the highest level, we ought to distinguish between use cases in which potential harms are specific to the use of ML and those agnostic to the involvement of ML, what we term means-dependent and means-independent harms. Within the category of means-dependent harms, it is crucial to distinguish between harms which flow from the technology functioning "as intended" and those resulting from malfunction (cf. Raji, Kumar, Horowitz, & Selbst, 2022). In attempting to understand how the functionality of ML systems relates to their potential for harm, it is important to recognise in which cases the learning problem, as specified, is not feasible in principle. Problem misspecification and attempts to use ML to accomplish misguided or impossible epistemic tasks pose one of the greatest ethical risks for the use of ML in any domain (cf. Andrews, Smart, & Birhane, 2024). Lastly, we highlight the role of "AI exceptionalism": the assumption that the involvement of AI/ML methods makes possible tasks which are widely understood to be impossible, or renders ethical the application of interventions which are, in general, regarded as unethical. We view the discursive role played by AI/ML in modern military operations as an instance of this "AI exceptionalism" (cf. Fang, 2024; Weirich, 2024).
We take this framework of responsibility allocation and harms analysis as a necessary starting place from which to evaluate the use of ML in military application.
REFERENCES
1. Alvarado, R. (2023). AI as an Epistemic Technology. Science and Engineering Ethics, 29(5), 32.
2. Andrews, M., Smart, A., & Birhane, A. (2024). The reanimation of pseudoscience in machine learning and its ethical repercussions. Patterns.
3. Fang, Z. (2024). Ghost in a Rhetorical Machine: Against the reification of artificial intelligence. Reboot.
4. Raji, I. D., Kumar, I. E., Horowitz, A., & Selbst, A. (2022, June). The fallacy of AI functionality. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 959-972).
5. Weirich, K. (2024). Military AI and the Illusion of Authority. The Prindle Post.
Submission Number: 15