SynthPert: Enhancing Biological Reasoning in LLMs via Synthetic Reasoning Traces for Cellular Perturbation Prediction

ICLR 2026 Conference Submission 21008 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: cellular perturbation prediction, genomics applications, LLMs, reasoning, synthetic data, perturbation prediction
TL;DR: We develop SynthPert, an LLM fine-tuned on synthetic reasoning traces that achieves state-of-the-art performance in predicting cellular responses to genetic perturbations, outperforming even the frontier model that generated its training data.
Abstract: Predicting cellular responses to genetic perturbations is a fundamental challenge in systems biology, critical for advancing therapeutic discovery and virtual cell modeling. While large language models (LLMs) show promise for biological reasoning, their application to perturbation prediction remains underexplored due to the difficulty of adapting them to structured experimental data. We present SynthPert, a novel method that enhances LLM performance through supervised fine-tuning on synthetic reasoning traces generated by frontier models. On the PerturbQA benchmark, our approach not only achieves state-of-the-art performance but surpasses the frontier model that generated its training data. Our results reveal three key insights: (1) synthetic reasoning traces effectively distill biological knowledge even when partially inaccurate; (2) the approach enables cross-cell-type generalization, reaching 87% accuracy on unseen RPE1 cells; and (3) performance gains persist even when only 2% of the quality-filtered training data is used. This work demonstrates the effectiveness of synthetic reasoning distillation for enhancing domain-specific reasoning in LLMs.
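The abstract describes a three-step pipeline: a frontier model generates reasoning traces for perturbation questions, traces are quality-filtered, and the survivors are used for supervised fine-tuning. The sketch below is a minimal illustration of that pipeline, not the authors' code; the prompt wording, the `query_frontier_model` helper, and the data fields are hypothetical placeholders.

```python
# Hypothetical sketch of a SynthPert-style data pipeline:
# generate synthetic reasoning traces, keep only those whose final
# answer matches the experimental label, and export them as
# (prompt, completion) pairs for standard supervised fine-tuning.

def query_frontier_model(prompt: str) -> str:
    """Placeholder for an API call to a frontier LLM that returns a
    step-by-step reasoning trace ending in a 'yes' or 'no' answer."""
    raise NotImplementedError

def build_prompt(perturbed_gene: str, readout_gene: str, cell_type: str) -> str:
    # Assumed prompt format; the actual benchmark phrasing may differ.
    return (
        f"In {cell_type} cells, gene {perturbed_gene} is knocked down. "
        f"Reason step by step about whether {readout_gene} is differentially "
        "expressed, then answer 'yes' or 'no'."
    )

def generate_filtered_traces(examples):
    """Quality filter: retain a trace only if its final answer agrees
    with the known experimental label."""
    kept = []
    for ex in examples:
        prompt = build_prompt(ex["perturbed_gene"], ex["readout_gene"], ex["cell_type"])
        trace = query_frontier_model(prompt)
        predicted = "yes" if trace.strip().lower().endswith("yes") else "no"
        if predicted == ex["label"]:
            kept.append({"prompt": prompt, "completion": trace})
    return kept

# The retained (prompt, reasoning-trace) pairs then feed a standard SFT
# loop for a smaller open model using any fine-tuning framework.
```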
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Submission Number: 21008