Abstract: Foundation models, a term first introduced in 2021, are large-scale pre-trained models (e.g., large language
models (LLMs) and vision-language models (VLMs)) that learn from extensive unlabeled datasets through
self-supervised methods, enabling them to excel in diverse downstream tasks. Models such as GPT can
be adapted to a wide range of applications, including question answering and visual understanding, often
outperforming task-specific AI models; this broad applicability across fields is what earned them their name. The development
of biomedical foundation models marks a significant milestone in leveraging artificial intelligence (AI) to
understand complex biological phenomena and advance medical research and practice. This survey explores
the potential of foundation models across diverse domains within biomedical fields, including computational
biology, drug discovery and development, clinical informatics, medical imaging, and public health. The
purpose of this survey is to inspire ongoing research in the application of foundation models to health
science.