Towards Publicly Accountable Frontier LLMs

Published: 23 Oct 2023, Last Modified: 28 Nov 2023, SoLaR Poster
Keywords: Fairness, Accountability, and Transparency
TL;DR: We propose a framework capturing key design considerations for effective external scrutiny of frontier LLMs, helping ensure that AI policy is well-informed and serves the public interest.
Abstract: With the increasing integration of frontier large language models (LLMs) into society and the economy, decisions related to their training, deployment, and use have far-reaching implications. These decisions should not be left solely in the hands of frontier LLM developers. LLM users, civil society, and policymakers need trustworthy sources of information to steer such decisions for the better. Involving outside actors in the evaluation of these systems (external scrutiny) offers a solution: it can help provide more accurate and complete information. Despite encouraging signs of increasing external scrutiny of frontier LLMs, its success is not assured. In this paper, we survey six requirements for effective external scrutiny of frontier AI systems and organize them under the ASPIRE framework: Access, Searching attitude, Proportionality to the risks, Independence, Resources, and Expertise. We then illustrate how external scrutiny might function throughout the AI lifecycle.
Submission Number: 111