Start Using Justifications When Explaining AI Systems to Decision Subjects

Klára Kolářová, Timothée Schmude

Published: 01 Jan 2026 · Last Modified: 06 Jan 2026 · License: CC BY-SA 4.0
Abstract: Every AI system that makes decisions about people has stakeholders who are affected by its outcomes. These stakeholders, whom we call decision subjects, have a right to understand how their outcome was produced and to challenge it. Explanations should support this process by making the algorithmic system transparent and creating an understanding of its inner workings. However, we argue that while current explanation approaches focus on descriptive explanations, decision subjects also require normative explanations, or justifications. In this position paper, we advocate for justifications as a key component of explanation approaches for decision subjects and make three claims to this end, namely that justifications i) fulfill decision subjects’ information needs, ii) shape their intent to accept or contest decisions, and iii) encourage accountability considerations throughout the system’s lifecycle. We propose four guiding principles for the design of justifications, provide two design examples, and close with directions for future work. With this paper, we aim to provoke thought on the role, value, and design of normative information in explainable AI for decision subjects.