Abstract: This paper critically evaluates the approach in Europe’s Artificial Intelligence Act (AI Act) to standards in AI regulation, and considers the suitability of ‘transplanting’ that approach to Australia. The AI Act uses standards to guide the implementation of legislative requirements aimed at promoting ‘trustworthy AI’. As a result, standards bodies play the role of ‘regulatory intermediary’ (a term coined by scholars of regulatory governance such as Abbott, Snidal and Levi-Faur), interposed between government regulators and regulatory targets. We explain how Europe’s use of standards for AI regulation is shaped by a set of institutional constraints and capabilities that are distinctive to the European context. Drawing on regulatory intermediary theory, we argue that the kinds of regulatory discretion that Europe’s AI Act delegates to standards and assurance bodies, which call for difficult judgments about rights and the public interest, exceed those bodies’ expertise and legitimacy. We identify challenges for inclusion in standard-making, and misaligned incentives that may undermine the goal of trustworthy AI (or, in Australia, safe and responsible AI). Over-reliance on standards would be particularly problematic in Australia, where institutional arrangements are very different to those in Europe. We therefore offer suggestions on how to make the best use of AI standards and avoid their pitfalls. AI standards may be useful for promoting trustworthy processes and for facilitating quantitative assessments of system inputs and outputs, including resource use. They will not, however, be well suited to resolving difficult questions of ethics, public policy and law, such as how to oversee and explain life-changing automated decisions. Finally, we urge regulators to prioritise efforts to develop and support the cross-disciplinary capabilities and inclusive, deliberative institutions needed to govern AI effectively.