Perspective: Lessons from Cybersecurity for Biological AI Safety and Regulation

Published: 23 Sept 2025 · Last Modified: 25 Oct 2025 · RegML 2025 Poster · CC BY 4.0
Keywords: Generative AI, Biosecurity, Large Language Models (LLMs), Biological Design Tools (BDTs), Zero Trust Policy
Abstract: The rise of generative artificial intelligence (AI) and its intersection with biotechnology are creating new biosecurity risks that traditional defenses cannot manage. Static, list-based screening systems designed to stop known threats are ill-equipped against novel pathogens that could be enabled by Large Language Models (LLMs) and advanced Biological Design Tools (BDTs). These technologies may lower barriers for inexperienced actors and accelerate the design of dangerous agents. We argue that cybersecurity offers a useful guide for responding to this challenge. Cybersecurity once relied on “castle-and-moat” perimeter defenses but has shifted to resilience-based models such as zero trust, which assume breach and focus on continuous verification and protection at the data level. Applying similar principles in biosecurity could enable secure tracking of biological designs, proactive testing through red-teaming, and collective defense via shared threat intelligence. This perspective calls for biosecurity to move from a reactive add-on to a secure-by-design foundation. Such a shift will require new technologies, governance, and interdisciplinary expertise to ensure that the bioeconomy advances safely and responsibly.
Submission Number: 91