InstructCell: A Multimodal Cell Language Model for Single-cell Analysis

IJCAI 2024 Workshop AI4Research, Submission 10

Published: 03 Jun 2024, Last Modified: 05 Jun 2024
License: CC BY 4.0
Keywords: Multimodal, Single-cell analysis, Large Language Model
TL;DR: A multimodal cell language model that leverages natural language to enhance single-cell analysis.
Abstract: As Large Language Models (LLMs) rapidly evolve, their influence in science is becoming increasingly prominent. The emerging capabilities of LLMs in task generalization and free-form dialogue can significantly advance fields like chemistry and biology. However, single-cell biology, the study of the foundational building blocks of living organisms, still faces several challenges. High knowledge barriers and the limited scalability of current methods prevent LLMs from being fully exploited for single-cell data, impeding direct accessibility and rapid iteration. To this end, we introduce InstructCell, which marks a paradigm shift by enabling single-cell analysis through natural language. By interpreting single-cell instructions through its multimodal architecture, InstructCell acquires deep expertise in single-cell biology and can accommodate a diverse range of analysis tasks. Extensive experiments further demonstrate InstructCell's robust performance and its potential to deepen single-cell insights, paving the way for more accessible and intuitive exploration of this pivotal field.
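To make the instruction-plus-cell setup concrete, here is a minimal Python sketch of how a natural-language instruction might be paired with a single-cell gene-expression profile as multimodal input. It is an illustration only: the names (`encode_cell`, the top-k tokenization scheme) are assumptions, not the paper's actual architecture or API.

```python
import numpy as np

def encode_cell(expression: np.ndarray, top_k: int = 64) -> list[int]:
    """Reduce a raw expression vector to the indices of its most highly
    expressed genes -- one simple (hypothetical) way to turn continuous
    single-cell data into discrete tokens a language model can attend to."""
    return np.argsort(expression)[::-1][:top_k].tolist()

# A toy cell: simulated expression counts over a small gene vocabulary.
rng = np.random.default_rng(0)
expression = rng.poisson(lam=1.0, size=2000).astype(float)

# A multimodal example pairs free-form text with the encoded cell.
prompt = {
    "instruction": "Predict the cell type of the following cell.",
    "cell_tokens": encode_cell(expression),
}
print(prompt["instruction"], "| cell tokens:", len(prompt["cell_tokens"]))
```

Any real system in this vein would replace the top-k heuristic with a learned cell encoder and feed both modalities into a jointly trained language model; the sketch only fixes the shape of the input pairing.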
Submission Number: 10