Abstract: Foundation models have rapidly permeated society, catalyzing a wave of generative AI applications spanning enterprise and consumer-facing contexts. While the societal impact of foundation models is growing, transparency is on the decline, mirroring the opacity that has plagued past digital technologies (e.g. social media). Reversing this trend is essential: transparency is a vital precondition for public accountability, scientific innovation, and effective governance. To assess the transparency of the foundation model ecosystem and help improve transparency over time, we introduce the Foundation Model Transparency Index. The Foundation Model Transparency Index specifies 100 fine-grained indicators that comprehensively codify transparency for foundation models, spanning the upstream resources used to build a foundation model (e.g. data, labor, compute), details about the model itself (e.g. size, capabilities, risks), and the downstream use (e.g. distribution channels, usage policies, affected geographies). We score 10 major foundation model developers (e.g. OpenAI, Google, Meta) against the 100 indicators to assess their transparency. To facilitate and standardize assessment, we score developers in relation to their practices for their flagship foundation model (e.g. GPT-4 for OpenAI, PaLM 2 for Google, Llama 2 for Meta). We present 10 top-level findings about the foundation model ecosystem: for example, no developer currently discloses significant information about the downstream impact of its flagship model, such as the number of users, affected market sectors, or how users can seek redress for harm. Overall, the Foundation Model Transparency Index establishes the level of transparency today to drive progress on foundation model governance via industry standards and regulatory intervention.
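For readers unfamiliar with the Index's mechanics: each of the 100 indicators is assessed as satisfied or not for a given developer, and a developer's overall score is the count of satisfied indicators. The sketch below is a hypothetical illustration of this aggregation; the indicator names, domain grouping, and data are illustrative stand-ins, not the Index's actual schema or results.

```python
# Hypothetical sketch of FMTI-style scoring: each developer is assessed
# against binary indicators grouped into upstream, model, and downstream
# domains; the overall score is the count of satisfied indicators.
# Indicator names and values here are illustrative, not real FMTI data.

from collections import Counter

# Example assessment: indicator -> (domain, satisfied?)
assessment = {
    "data_sources_disclosed": ("upstream", True),
    "labor_wages_disclosed": ("upstream", False),
    "model_size_disclosed": ("model", True),
    "capabilities_evaluated": ("model", True),
    "affected_market_sectors": ("downstream", False),
    "user_redress_mechanism": ("downstream", False),
    # ... remaining indicators omitted for brevity
}

def score(assessment):
    """Return the overall count and per-domain counts of satisfied indicators."""
    overall = sum(satisfied for _, satisfied in assessment.values())
    by_domain = Counter(
        domain for domain, satisfied in assessment.values() if satisfied
    )
    return overall, dict(by_domain)

overall, by_domain = score(assessment)
print(f"Overall: {overall}/{len(assessment)}; by domain: {by_domain}")
```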
Certifications: Featured Certification
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Change log for the camera-ready version:
1. We have deanonymized the paper by adding (i) author information, (ii) acknowledgements, (iii) funding and conflicts of interest, and (iv) links to the live FMTI website and data.
2. We have fixed formatting and citation errors, addressing the comments of the AE, as well as the more granular comments raised in individual reviews.
3. We have rewritten text to address specific comments. Most notably, we have added new text to address a core concern raised by two reviewers: that the cost of improving FMTI scores is uneven and advantages higher-resourced companies (e.g. those with larger legal teams). This directly addresses both of these reviewers' feedback as well as the Broader Impact comment by one reviewer.
4. We have fixed all additional typos and addressed all remaining specific comments.
Overall, these changes address all points raised by the AE and all reviewer feedback as we see fit. The sole significant point raised by reviewers that we do not act on, which we discuss directly in our response to the reviewer, is scoring new entities (e.g. AI2, a non-company, for models released after 2023), given that (i) we score companies for their conduct as of 2023 and (ii) we only score companies.
In total, this addresses all requested revisions, thereby completing the camera-ready preparation.
Assigned Action Editor: ~Satoshi_Hara1
Submission Number: 3022