SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities

Published: 07 Oct 2023, Last Modified: 01 Dec 2023, EMNLP 2023 Findings
Submission Type: Regular Long Paper
Submission Track: Theme Track: Large Language Models and the Future of NLP
Keywords: large language model, speech, multi-modal
Abstract: Multi-modal large language models are regarded as a crucial step towards Artificial General Intelligence~(AGI) and have garnered significant interest with the emergence of ChatGPT. However, current speech-language models typically adopt the cascade paradigm, preventing inter-modal knowledge transfer. In this paper, we propose SpeechGPT, a large language model with intrinsic cross-modal conversational abilities, capable of perceiving and generating multi-modal content. With discrete speech representations, we construct SpeechInstruct, the first large-scale cross-modal speech instruction dataset. Additionally, we employ a three-stage training strategy that includes modality-adaptation pre-training, cross-modal instruction fine-tuning, and chain-of-modality instruction fine-tuning. The experimental results demonstrate that SpeechGPT has an impressive capacity to follow cross-modal human instructions and highlight the potential of handling multiple modalities with one model. Code and models are available in \url{https://github.com/0nutation/SpeechGPT}. Demos are shown in \url{https://0nutation.github.io/SpeechGPT.github.io/}.
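For readers unfamiliar with the discrete-unit interface the abstract refers to, the following is a minimal sketch of how speech might be serialized into an LLM's token stream and how chain-of-modality decoding could be structured. All function names and tag tokens here (speech_to_units, <eoh>, the [Human]/[SpeechGPT] turn markers) are illustrative assumptions based only on the abstract, not the released implementation.

```python
# Hypothetical sketch of the discrete-unit interface described in the
# abstract. Names and token formats are illustrative assumptions, not
# SpeechGPT's actual API.

from typing import List

def speech_to_units(waveform: List[float]) -> List[int]:
    # Placeholder: a real pipeline would encode the waveform with a
    # self-supervised model (e.g. HuBERT-style features) and quantize
    # frames into a sequence of discrete unit IDs.
    return [31, 42, 17, 42]  # dummy unit sequence

def units_to_prompt(units: List[int]) -> str:
    # Serialize unit IDs as special tokens so the LLM can consume speech
    # through its ordinary text vocabulary.
    unit_str = "".join(f"<{u}>" for u in units)
    return f"[Human]: {unit_str} <eoh> [SpeechGPT]:"

if __name__ == "__main__":
    prompt = units_to_prompt(speech_to_units([0.0] * 16000))
    print(prompt)
    # Chain-of-modality decoding (per the abstract's third training stage)
    # would then generate, in order: a text transcript of the spoken
    # instruction, a text response, and finally discrete units that a
    # unit-based vocoder converts back to audio.
```

Representing speech as discrete tokens in the same vocabulary as text is what lets a single decoder handle both modalities, which is the property the abstract contrasts with cascaded ASR-LLM-TTS systems.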
Submission Number: 380