Keywords: governance, regulation, liability, law, policy, risk management, accountability
TL;DR: This paper describes how governance reforms could counter harmful information management practices in frontier AI development.
Abstract: Information about risks from general-purpose AI (GPAI) systems is essential for effective risk management and oversight. Both developers and external stakeholders, such as auditors, regulators, insurers, and users, need credible risk evidence to make informed decisions. However, GPAI development firms ("firms") often control how such information is produced and shared, both internally and externally. Furthermore, they may have incentives to preclude, conceal, or distort information about AI risks. Drawing on historical patterns from industries such as tobacco, pharmaceuticals, and chemicals, this paper categorizes four types of information management practices that can impede risk transparency and management: (1) $\textbf{information generation}$: influencing what research is conducted and how; (2) $\textbf{information visibility}$: controlling what information is shared internally and externally; (3) $\textbf{perceived information credibility}$: shaping how risk evidence is interpreted and trusted; and (4) $\textbf{information acknowledgment}$: avoiding or limiting recognition of risks. We also present seven categories of policy options to reduce the use or impact of these practices: improving scientific oversight, altering liability and immunity rules, externalizing risk assessments, limiting confidentiality protections, protecting parties reporting risk information (e.g., whistleblowers), expanding legal privileges, and mandating experimentation. For each, we highlight parallels from other sectors, potential benefits, and key policy design challenges. Together, these insights show how governance reforms could counter harmful information management and foster more reliable evidence for AI risk oversight.
Submission Number: 35