Petals: Collaborative Inference and Fine-tuning of Large Models

Published: 24 Oct 2022 · Last Modified: 08 Sept 2024 · WBRC 2022 Oral
Keywords: volunteer computing, distributed deep learning, distributed inference, efficient inference, large language models, gpt-3, collaborative training, collaborative inference
TL;DR: We propose a system for inference and fine-tuning of 100B+ language models collaboratively by joining the resources of multiple parties. Our system can run inference of BLOOM-176B over the Internet significantly faster than RAM offloading.
Abstract: Many NLP tasks benefit from using large language models (LLMs) that often have more than 100 billion parameters. With the release of BLOOM-176B and OPT-175B, everyone can download pretrained models of this scale. Still, using these models requires high-end hardware unavailable to many researchers. In some cases, LLMs can be used more affordably via RAM offloading or hosted APIs. However, these techniques have innate limitations: offloading is too slow for interactive inference, while APIs are not flexible enough for research that requires access to weights, attention or logits. In this work, we propose Petals, a system for inference and fine-tuning of large models collaboratively by joining the resources of multiple parties. We demonstrate that this strategy significantly outperforms offloading for very large models, running inference of BLOOM-176B on consumer GPUs at $\approx$ 1 step per second. Unlike most inference APIs, Petals also natively exposes the hidden states of served models, allowing its users to train and share custom model extensions based on efficient fine-tuning methods.
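Concretely, the collaborative setup is exposed through a Python client that mirrors the Hugging Face `transformers` interface. The sketch below is illustrative only: the class and checkpoint names (`AutoDistributedModelForCausalLM`, `bigscience/bloom`) follow recent public releases of the Petals client and may differ from the version evaluated in the paper.

```python
# A minimal sketch of client-side inference with Petals.
# Assumes `pip install petals`; class and checkpoint names follow recent
# public releases of the client and may differ from the paper's version.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

MODEL_NAME = "bigscience/bloom"  # served collaboratively across remote GPUs

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Only the small input/output layers are loaded locally; the transformer
# blocks run on remote servers that each host a slice of the model.
model = AutoDistributedModelForCausalLM.from_pretrained(MODEL_NAME)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```

Because the client sees the hidden states flowing between blocks, parameter-efficient methods such as prompt tuning or adapters can be trained locally against the remotely hosted backbone, which is what makes the custom model extensions mentioned above possible.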
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/petals-collaborative-inference-and-fine/code)