Are machines automating morality?

22 Sept 2023 (modified: 12 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Primary Area: societal considerations including fairness, safety, privacy
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: philosophy; ethics; morality; societal considerations
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: The advent of artificial intelligence (AI) and machine learning has ignited a profound inquiry into the morality of machines. In a quest for efficiency, pleasure, and comfort, we delegate and automate more and more decisions and actions to AI-based systems. In this paper, we delve into the complex interplay between artificial intelligence and morality. We address the fundamental question of whether machines possess morals and whether machine learning systems can learn moral values. As AI systems increasingly take on decision-making roles in our lives, ethical concerns are growing among researchers and philosophers. Making an ethical decision has always been connected to human agency. We highlight the prevailing utilitarian ethics of the tech-centric Silicon Valley culture and its influence on the development of AI. As machines make more and more decisions, they consequently express a certain morality. We trace the emergence of the idea of "moral machines" to describe machine learning systems, for instance in the context of autonomous vehicles, where AI-based systems must make ethically challenging decisions. We discuss the pertinence of the well-known "trolley problem" as an illustrative example for exploring the utilitarian aspect of these ethical dilemmas; the problem applies to any domain where machines make moral choices based on patterns and data. Calling such machines "moral" underlines the fact that AI systems make moral choices without any human intervention; the term is therefore not confined to autonomous vehicles. This paper examines the implications of this automated morality, and how it can affect individuals' sense of responsibility, raising questions about the future of morality. Automated values challenge the idea of responsibility and moral agency. We therefore call for a thoughtful and critical examination of the ethical implications of machine learning shaping our moral background.
In the age of technological disruption, ethical questions surrounding automated morality must be addressed to safeguard our ethical compass.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5717