Track: Long Paper Track (up to 9 pages)
Keywords: justified trust, fairness, transparency artefacts, machine learning metadata, fairness assessment
TL;DR: This paper explores evidence-based trust in AI fairness, proposing continuous monitoring and transparent reporting to justify trust, with insights from industry workshops and a credit risk analysis case study.
Abstract: AI systems are becoming increasingly complex, opaque, and connected to systems that operate without human oversight. Ensuring trust in these systems has therefore become challenging, yet vital. Trust is a multifaceted concept that varies over time and context, and to support users in deciding what to trust, recent work has examined the trustworthiness of systems, including their security, privacy, safety, and fairness. In this work, we explore the fairness of AI systems. While mechanisms such as formal verification aim to guarantee properties such as fairness, they are rarely applied at scale due to cost and complexity. A widely deployed alternative is to make claims about the fairness of a system, with supporting evidence, to elicit justified trust in the system. Through continuous monitoring and transparent reporting of existing metadata from model experiment logs, organisations can provide reliable evidence for such claims. This paper presents a new approach to evidence-based trust. We share findings from a workshop with industry professionals and provide a practical example of how these concepts can be applied in a credit risk analysis system.
Submission Number: 74