Platypus: Quick, Cheap, and Powerful Refinement of LLMs

Published: 28 Oct 2023, Last Modified: 26 Nov 2023 · Instruction Workshop @ NeurIPS 2023
Keywords: Large Language Models, Generative AI, Instruction-tuning, Open-source, NLP
TL;DR: Platypus combines the strengths of LoRA and careful dataset curation to offer a high-performance pipeline for fine-tuning LLMs without vast computational resources.
Abstract: We present **Platypus**, a family of fine-tuned and merged Large Language Models (LLMs) that achieved the strongest performance and stood in first place on HuggingFace's Open LLM Leaderboard at the time of writing. In this work we describe (1) our curated dataset **Open-Platypus**, a subset of other open datasets which we release to the public; (2) our process of fine-tuning and merging LoRA modules, which conserves the strong prior of pretrained LLMs while bringing specific domain knowledge to the surface; and (3) our efforts in checking for test-set leaks and contamination in the training data, which can inform future research. Specifically, the Platypus family achieves strong performance on quantitative LLM metrics across model sizes, topping the global Open LLM Leaderboard while using just a fraction of the fine-tuning data and overall compute required by other state-of-the-art fine-tuned LLMs. In particular, a 13B Platypus model can be trained on a single A100 GPU using 25k questions in 5 hours. This is a testament to the quality of our Open-Platypus dataset and opens opportunities for further improvements in the field.
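
The fine-tune-then-merge workflow described in (2) can be illustrated with a minimal sketch using the Hugging Face PEFT library. This is not the authors' exact recipe: the base model name, LoRA rank, alpha, and target modules below are illustrative placeholders, and the training loop itself is elided.

```python
# Minimal sketch (assumed setup, not the paper's exact configuration):
# fine-tune a base LLM with a LoRA adapter, then merge the adapter weights
# back into the base model so it can be served or combined with other models.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "meta-llama/Llama-2-13b-hf"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name, torch_dtype=torch.bfloat16)

# LoRA freezes the pretrained weights and trains small low-rank adapters,
# which is what makes fine-tuning a 13B model on a single A100 feasible.
lora_config = LoraConfig(
    r=16,                                  # adapter rank (assumed value)
    lora_alpha=32,                         # scaling factor (assumed value)
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable

# ... run a standard supervised fine-tuning loop / Trainer on the curated data ...

# Fold the trained adapter into the base weights so the merged model no longer
# needs the PEFT wrapper.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("platypus-13b-merged")
```

Keeping the base weights frozen and only training the low-rank adapters is what the abstract refers to as conserving the strong prior of the pretrained LLM while surfacing domain knowledge from the curated data.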
Submission Number: 77