GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints

Published: 07 Oct 2023, Last Modified: 01 Dec 2023
Venue: EMNLP 2023 Main
Submission Type: Regular Short Paper
Submission Track: Efficient Methods for NLP
Submission Track 2: Natural Language Generation
Keywords: efficient nlp, multi-query attention, fast inference
TL;DR: We present GQA, a generalization of multi-query and standard multi-head attention, and a recipe for uptraining multi-head models to fast multi-query or GQA versions.
Abstract: Multi-query attention (MQA), which uses only a single key-value head, drastically speeds up decoder inference. However, MQA can lead to quality degradation, and moreover it may not be desirable to train a separate model just for faster inference. We (1) propose a recipe for uptraining existing multi-head language model checkpoints into models with MQA using 5% of the original pre-training compute, and (2) introduce grouped-query attention (GQA), a generalization of multi-query attention which uses an intermediate number of key-value heads (more than one, but fewer than the number of query heads). We show that uptrained GQA achieves quality close to multi-head attention with speed comparable to MQA.
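As a rough illustration of the attention variant described in the abstract, here is a minimal NumPy sketch of grouped-query attention. All names and shapes (num_query_heads, num_kv_heads, head_dim, the projection matrices) are illustrative assumptions, not taken from the paper's code, and causal masking and batching are omitted for brevity.

```python
# Minimal sketch of grouped-query attention (GQA), assuming single-example,
# single-layer shapes. Setting num_kv_heads=1 recovers MQA; setting
# num_kv_heads=num_query_heads recovers standard multi-head attention.
import numpy as np

def grouped_query_attention(x, wq, wk, wv, num_query_heads, num_kv_heads):
    """x: (seq_len, d_model); wq: (d_model, num_query_heads * head_dim);
    wk, wv: (d_model, num_kv_heads * head_dim)."""
    seq_len, _ = x.shape
    head_dim = wq.shape[1] // num_query_heads
    group_size = num_query_heads // num_kv_heads  # query heads per KV head

    q = (x @ wq).reshape(seq_len, num_query_heads, head_dim)
    k = (x @ wk).reshape(seq_len, num_kv_heads, head_dim)
    v = (x @ wv).reshape(seq_len, num_kv_heads, head_dim)

    # Each group of `group_size` query heads shares one key-value head,
    # so only num_kv_heads keys/values need to be stored during decoding.
    k = np.repeat(k, group_size, axis=1)  # (seq_len, num_query_heads, head_dim)
    v = np.repeat(v, group_size, axis=1)

    scores = np.einsum('qhd,khd->hqk', q, k) / np.sqrt(head_dim)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out = np.einsum('hqk,khd->qhd', weights, v)
    return out.reshape(seq_len, num_query_heads * head_dim)
```

The speedup at inference comes from the smaller key-value cache: with num_kv_heads < num_query_heads, fewer key and value vectors are loaded per decoding step, while quality stays closer to multi-head attention than with a single shared head.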
Submission Number: 294