Explainability in Algorithmic Decision Systems

Published: 05 Aug 2025 · Last Modified: 15 Apr 2026 · A Companion to Applied Philosophy of AI (Blackwell) · arXiv.org perpetual, non-exclusive license
Abstract: Much has been made of the opacity of certain AI-based decision systems. Many have argued that in high-stakes decision contexts, a failure to interpret, explain, or justify the outputs of such systems amounts to a failure of our obligations to those over whom we deploy them. These obligations are typically understood as obligations to provide information to decision subjects (or their proxies) so they may assess whether they have been treated appropriately. Concerns about black box systems have motivated work on so-called "explainable AI" ("XAI") tools and techniques to render black boxes transparent. At the same time, these concerns have been met with skepticism about both the meaning and value of explainability, especially given the opaque nature of much human decision making. In this chapter, we summarize the current state of the debate between explainability proponents and skeptics. We then articulate an alternative basis for grounding explainability requirements that appeals to duties of consideration – duties decision makers have to ensure that they are reasoning about decision subjects appropriately. We explain how this alternative approach helps address explainability skepticism and orients our thinking about how decision makers ought to integrate AI-based tools into their decision-making processes.