ProtocoLLM: Automatic Evaluation Framework of LLMs on Domain-Specific Scientific Protocol Formulation Tasks

ACL ARR 2024 June Submission 3044 Authors

15 Jun 2024 (modified: 04 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Automated generation of scientific protocols executable by robots can significantly accelerate scientific research. Large Language Models (LLMs) excel at Scientific Protocol Formulation Tasks (SPFT), but evaluating their capabilities currently relies on human judgment. Here, we propose ProtocoLLM, a flexible, automatic framework for evaluating LLMs on SPFT. The framework prompts both the target model and GPT-4 to extract pseudocode from biology protocols using only predefined lab actions, then scores the target model's output with Llam-Eval, where the pseudocode generated by GPT-4 serves as the baseline and Llama-3 acts as the evaluator. Our adaptable prompt-based evaluation method, Llam-Eval, offers significant flexibility in the choice of evaluation model, material, and criteria, and is free of cost. We evaluate GPT variants, Llama, Mixtral, Gemma, Cohere, and Gemini, and find that, overall, GPT and Cohere are powerful scientific protocol formulators. We also introduce Bioprot 2.0, a dataset of biology protocols with corresponding pseudocode, which can aid LLMs in both formulating and evaluating SPFT. Our work extends to assessing LLMs on SPFT across various scientific domains and to other fields that require protocol generation for specific goals.
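As a rough illustration of the extract-then-judge pipeline the abstract describes, the sketch below shows one way it could be wired up. The `call_llm` helper, the prompt texts, the lab-action list, and the score parsing are all hypothetical placeholders, not the paper's actual implementation; only the overall structure (target model and GPT-4 both extract pseudocode, Llama-3 judges against the GPT-4 baseline) comes from the abstract.

```python
# Hypothetical sketch of the ProtocoLLM extract-then-judge pipeline.
# `call_llm` stands in for whatever API client is available; the
# prompts, lab-action vocabulary, and score parsing are illustrative.

LAB_ACTIONS = ["transfer", "mix", "incubate", "centrifuge", "wash"]  # assumed subset

EXTRACT_PROMPT = (
    "Rewrite the following biology protocol as pseudocode using ONLY "
    f"these lab actions: {', '.join(LAB_ACTIONS)}.\n\nProtocol:\n{{protocol}}"
)

JUDGE_PROMPT = (
    "You are an evaluator (Llam-Eval). Compare the candidate pseudocode "
    "against the baseline pseudocode and return a score from 1 to 5.\n\n"
    "Baseline:\n{baseline}\n\nCandidate:\n{candidate}\n\nScore:"
)


def call_llm(model: str, prompt: str) -> str:
    """Placeholder for an actual API call (e.g. OpenAI SDK, local Llama-3)."""
    raise NotImplementedError


def evaluate_protocol(protocol: str, target_model: str) -> int:
    # Step 1: both the target model and GPT-4 extract pseudocode
    # restricted to the predefined lab actions.
    candidate = call_llm(target_model, EXTRACT_PROMPT.format(protocol=protocol))
    baseline = call_llm("gpt-4", EXTRACT_PROMPT.format(protocol=protocol))
    # Step 2: Llama-3 judges the candidate against the GPT-4 baseline.
    verdict = call_llm("llama-3", JUDGE_PROMPT.format(baseline=baseline,
                                                      candidate=candidate))
    # Naive parsing: take the first digit in the verdict; default to 0.
    return next((int(c) for c in verdict if c.isdigit()), 0)
```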
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: evaluation, evaluation methodologies, automatic evaluation of language resources, NLP datasets, automatic evaluation of datasets
Contribution Types: Model analysis & interpretability, Data resources, Position papers
Languages Studied: English
Submission Number: 3044