https://dl.acm.org/doi/10.1145/3531146.3533084

Published: 21 Jun 2022 (Last Modified: 15 Jan 2026) · ACM FAccT '22 · CC BY 4.0
Abstract: A spate of recent accidents and a lawsuit involving Tesla's ‘self-driving' cars highlight the growing need for meaningful accountability when harms are caused by AI systems. Tort (or civil liability) lawsuits are one important way for victims to redress such harms. The prospect of tort liability may also prompt AI developers to take better precautions against safety risks. Tort claims of all kinds will be hindered by AI opacity: the difficulty of determining how and why complex AI systems make decisions. We address this problem by formulating and evaluating several options for mitigating AI opacity that combine expert evidence, legal argumentation, civil procedure, and Explainable AI approaches. We emphasise the need for explanations of AI systems in tort litigation to be attuned to the elements of legal ‘causes of action' – the specific facts that must be proven to succeed in a lawsuit. We take a recent Australian case involving explainable AI evidence as a starting point from which to map contemporary Explainable AI approaches to elements of tortious causes of action, focusing on misleading conduct, negligence, and product liability for safety defects. Our work synthesises law, legal procedure, and computer science to provide greater clarity on the opportunities and challenges for Explainable AI in civil litigation; it may prove helpful to potential litigants and courts, and may illuminate key targets for regulatory intervention.