Keywords: Artificial Intelligence, Algorithm, Interpretability, Transparency, Algorithm Governance
Paper Type: Full papers
TL;DR: Clarifies the subjects, scope, and liability of algorithmic explanation; develops indirect explanation procedures and a public-private collaborative governance model; and proposes global governance of AI explanation standards.
Abstract: The “black box” nature of AI algorithms presents a profound challenge to the foundational principles of modern legal systems, specifically the attribution of liability and procedural justice. This article addresses the legal boundaries and implementation mechanisms of explainability by proposing an integrated framework that combines a hierarchical model with indirect methods. We argue that the duty to explain must be governed by the principle of proportionality, dynamically calibrating its scope to the risk level of the algorithm. A novel “systemic-contextual-outcome” three-tier explanation model is constructed, delineating distinct responsibilities for developers, deployers, and users. To resolve the inherent tension between transparency and intellectual property, an indirect explanation mechanism is proposed, utilizing alternative technical solutions for compliance. At the governance level, a public-private collaborative path is advocated, wherein the state sets mandatory framework standards and supervises enforcement, while enterprises independently develop compliance tools. This research provides critical theoretical support for advancing AI legislation and constructing a secure and self-governing AI governance framework for China.
Poster PDF: pdf
Submission Number: 17