Track: Sociotechnical
Keywords: AI governance, international standards, model evaluations, AI safety, risk assessment, risk mitigation, EU, AI policy, large language models, foundation models
TL;DR: We propose a dedicated GPAI Evaluation Standards Taskforce to develop and adapt GPAI evaluation standards for robustness, reproducibility, and interoperability.
Abstract: General-purpose AI (GPAI) evaluations have been proposed as a promising way of identifying and mitigating systemic risks posed by AI development and deployment. While GPAI evaluations play an increasingly central role in institutional decision- and policy-making – including through the European Union (EU) AI Act’s mandate to conduct evaluations of GPAI models presenting systemic risk – no standards exist to date to promote their quality or legitimacy. To strengthen GPAI evaluations in the EU, currently the first and only jurisdiction to mandate GPAI evaluations, we outline four desiderata: internal validity, external validity, reproducibility, and portability. To uphold these desiderata in a dynamic environment of continuously evolving risks, we propose a dedicated EU GPAI Evaluation Standards Taskforce, to be housed within the bodies established by the EU AI Act. We outline the responsibilities of the Taskforce, specify the GPAI provider commitments that would facilitate its success, discuss its potential impact on global AI governance, and address potential sources of failure that policymakers should heed.
Submission Number: 54