Efficiency at scale: Investigating the performance of diminutive language models in clinical tasks

Published: 01 Jan 2024 · Last Modified: 12 May 2025 · Artificial Intelligence in Medicine 2024 · License: CC BY-SA 4.0
Abstract: Highlights
• State-of-the-art performance in clinical NLP using efficient fine-tuning methods.
• 25-million-parameter LLMs benefit from LoRA fine-tuning.
• Classification performance matched with 98% fewer trained parameters.
• The performance trade-off in tiny LLMs outweighs the cost of much larger language models.
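The "98% fewer trained parameters" highlight follows from LoRA's low-rank factorization: instead of updating a full d_in × d_out weight matrix, only two small factors A (d_in × r) and B (r × d_out) are trained. A minimal sketch of that parameter arithmetic, using illustrative dimensions (768 × 768, rank 8) that are assumptions and not taken from the paper:

```python
def full_trainable_params(d_in: int, d_out: int) -> int:
    """Parameters updated when fine-tuning one full weight matrix."""
    return d_in * d_out

def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    """Parameters updated when training only the LoRA factors
    A (d_in x rank) and B (rank x d_out) for that matrix."""
    return d_in * rank + rank * d_out

# Hypothetical example: a 768 x 768 projection with rank-8 adapters.
full = full_trainable_params(768, 768)     # 589,824
lora = lora_trainable_params(768, 768, 8)  # 12,288
reduction = 1 - lora / full
print(f"LoRA trains {lora:,} of {full:,} params ({reduction:.1%} fewer)")
```

With these illustrative numbers the reduction is about 97.9%, on the order of the 98% figure the highlights report; the exact saving depends on the model's layer shapes and the chosen rank.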