AI and biosecurity: The need for governance

Published: 23 Aug 2024 · Last Modified: 21 Aug 2025 · OpenReview Archive Direct Upload · CC BY 4.0
Abstract: Great benefits to humanity will likely ensue from advances in artificial intelligence (AI) models trained on or capable of meaningfully manipulating substantial quantities of biological data, from speeding up drug and vaccine design to improving crop yields (1–3). But as with any powerful new technology, such biological models will also pose considerable risks. Because of their general-purpose nature, the same biological model able to design a benign viral vector to deliver gene therapy could be used to design a more pathogenic virus capable of evading vaccine-induced immunity (4). Voluntary commitments among developers to evaluate biological models’ potential dangerous capabilities are meaningful and important but cannot stand alone. We propose that national governments, including the United States, pass legislation and set mandatory rules that will prevent advanced biological models from substantially contributing to large-scale dangers, such as the creation of novel or enhanced pathogens capable of causing major epidemics or even pandemics.