An Evaluation of Large Language Models in Bioinformatics Research

TMLR Paper1769 Authors

01 Nov 2023 (modified: 15 Jan 2024) · Rejected by TMLR
Abstract: Large language models (LLMs) such as ChatGPT have gained considerable interest across diverse research communities. Their notable ability for text completion and generation has inaugurated a novel paradigm for language-interfaced problem solving. However, the potential and efficacy of these models in bioinformatics remain incompletely explored. In this work, we study the performance of GPT variants on a wide spectrum of crucial bioinformatics tasks. These tasks include the identification of potential coding regions, extraction of named entities for genes and proteins, detection of antimicrobial and anti-cancer peptides, molecular optimization, and resolution of educational bioinformatics problems. Our findings indicate that, given appropriate prompts, LLMs like GPT variants can successfully handle most of these tasks. In addition, we provide a thorough analysis of their limitations in the context of complicated bioinformatics tasks. We envision this work providing new perspectives and motivating future research in both LLM applications and bioinformatics.
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
- We have included more compared methods and expanded the datasets for a comprehensive comparison.
- We have summarized the significance of studying these tasks for bioinformatics research.
- We have clarified ambiguous descriptions of multiple experimental details and corrected typos.
- We have included additional experiments comparing distributions to show the robustness of the results.
- We have included more discussion of the experimental results and limitations.
Assigned Action Editor: ~Colin_Raffel1
Submission Number: 1769