Toward Generative Virtual Cells: Co-Evolving World Models and Perturbation Planners

Published: 02 Mar 2026, Last Modified: 13 Mar 2026 · Gen² 2026 Poster · CC BY 4.0
Track: Full / long paper (5-8 pages)
Keywords: virtual cell models, single-cell -omics, single-cell transcriptomics, perturbation effect modeling, perturbation planning, world models, active learning, active experimental design, co-evolution, Bayesian optimization, closed-loop experimentation, uncertainty quantification, CRISPR screening, gene regulatory networks, lab-in-the-loop, automated experimentation, mechanistic modeling, synthetic biology, digital twins, out-of-distribution generalization, high-content screening, phenotype discovery, recursive self-improvement, adaptive agents, biologically grounded evaluation, benchmark testbed
TL;DR: We propose a self-improving agent in which virtual cell world models and perturbation planners co-evolve. On a synthetic testbed, this closed-loop framework yields a >95% reduction in prediction error and superior sample efficiency in phenotype discovery compared with static baselines.
Abstract: Data-driven models can predict cellular responses to perturbations, yet they rarely help design the next experiment. Conversely, experiment-design policies typically assume a fixed surrogate model. We propose an adaptive agent that jointly evolves a virtual cell world model and a perturbation-design policy through uncertainty-aware experiment design, with each component proposing changes that improve the other. We introduce a minimal synthetic biology testbed: a five-gene regulatory network with CRISPR-like and environmental perturbations, returning noisy single-cell readouts. Within a co-evolution loop, a Bayesian-optimization-style planner chooses perturbations based on world-model predictions, while an outer loop allows the agent to modify its world-model architecture under validation-and-capacity gating. On this testbed, continual retraining within the co-evolution loop is the dominant factor driving a >95% reduction in in-distribution prediction error. We also identify adaptation-induced distribution shift: planner bias narrows the training distribution, degrading out-of-distribution generalization by 4×. The architecture-search component correctly identifies the initial architecture as sufficient for this problem scale, demonstrating capacity-aware null detection: a desirable conservative property absent from unconstrained self-modification. The framework provides a concrete, safety-aware testbed for studying model–planner coupling in scientific domains.
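The inner plan–experiment–retrain loop from the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes a linear five-gene network as the "wet lab," an anchored linear ensemble as the world model (member disagreement standing in for uncertainty), and a pure-exploration acquisition rule standing in for the Bayesian-optimization-style planner. All names (`W_TRUE`, `run_experiment`, `WorldModel`, `plan`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N_GENES = 5

# Hypothetical ground truth: a linear 5-gene regulatory network whose
# steady-state expression shift under a knockdown-style perturbation u is W @ u.
W_TRUE = rng.normal(scale=0.5, size=(N_GENES, N_GENES))

def run_experiment(u, n_cells=32):
    """The 'wet lab': noisy single-cell readouts of the true response."""
    return W_TRUE @ u + rng.normal(scale=0.1, size=(n_cells, N_GENES))

class WorldModel:
    """Anchored linear ensemble: member disagreement serves as uncertainty."""
    def __init__(self, n_members=5, lam=0.1):
        self.anchors = [rng.normal(scale=0.5, size=(N_GENES, N_GENES))
                        for _ in range(n_members)]
        self.members = [a.copy() for a in self.anchors]
        self.lam = lam
        self.X, self.Y = [], []

    def fit(self, u, cells):
        self.X.append(u)
        self.Y.append(cells.mean(axis=0))
        X, Y = np.array(self.X), np.array(self.Y)
        G = X.T @ X + self.lam * np.eye(N_GENES)
        for i, W0 in enumerate(self.anchors):
            # Ridge regression anchored at each member's random prior draw,
            # so unexplored perturbation directions keep high disagreement.
            A = np.linalg.solve(G, X.T @ Y + self.lam * W0.T)
            self.members[i] = A.T

    def predict(self, u):
        preds = np.stack([M @ u for M in self.members])
        return preds.mean(axis=0), preds.std(axis=0).sum()

def plan(model, candidates):
    """Pure-exploration planner: query the most uncertain perturbation."""
    return max(candidates, key=lambda u: model.predict(u)[1])

# Closed inner loop: plan -> experiment -> retrain the world model.
candidates = [np.eye(N_GENES)[i] for i in range(N_GENES)]  # single-gene KDs
model = WorldModel()
for _ in range(10):
    u = plan(model, candidates)
    model.fit(u, run_experiment(u))

# In-distribution error shrinks as the loop accumulates targeted data.
err = np.mean([np.linalg.norm(model.predict(u)[0] - W_TRUE @ u)
               for u in candidates])
```

Because the planner only ever queries the five single-gene knockdowns, the accumulated training set is narrow: this is exactly the adaptation-induced distribution shift the abstract flags, since `err` on held-out combinatorial perturbations would not shrink the same way.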
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 72