A call for more explainable AI in law enforcement

Published: 01 Jan 2021, Last Modified: 12 Feb 2025 · EDOC Workshops 2021 · CC BY-SA 4.0
Abstract: The use of AI in law enforcement raises several significant ethical and legal concerns. One of them is the AI explainability principle, which is mentioned in numerous national and international AI ethics guidelines. This paper firstly analyses what the AI explainability principle could mean in relation to AI use in law enforcement, namely, to whom, why and how explanations of the functioning of AI and its outcomes need to be provided. Secondly, it explores legal obstacles to ensuring the desired explainability of AI technologies, in particular the trade secret protection that often applies to AI modules and prevents access to proprietary elements of the algorithm. Finally, the paper outlines and discusses three ways to mitigate this conflict between the AI explainability principle and trade secret protection. It encourages law enforcement authorities to be more proactive in ensuring that Face Recognition Technology (FRT) outputs are explainable to different stakeholder groups, especially those directly affected.