Keywords: Large Speech Language Models, Native Multimodal Large Language Models, End-to-End Speech Interaction
TL;DR: We propose DeepOmni, an MoE-based model that mitigates catastrophic forgetting in end-to-end native-speech multimodal learning.
Abstract: Native multimodal large language models (MLLMs) restructure a single large language model (LLM) into a spoken language model (SLM) capable of both speech and text generation. Compared to modular and aligned MLLMs, native MLLMs preserve richer paralinguistic features such as emotion and prosody, and generate speech responses directly within the backbone LLM rather than using a separate speech decoder. This integration also yields lower response latency and smoother interaction. However, native MLLMs suffer from catastrophic forgetting and performance degradation, because the paired speech-text data available for their pretraining is far scarcer than the vast amount of text data used to pretrain text LLMs.
To address this issue, we propose DeepOmni, a framework for adaptive modality expert learning based on a Mixture of Experts (MoE) architecture.
DeepOmni first adaptively partitions the experts within the LLM into modality experts according to their modality load. Each modality expert then undergoes specialized single-modality training, followed by joint multimodal collaborative training. As a result, DeepOmni incurs only a $5.5\%$ performance drop relative to the original LLM, far below the average drop of over $20\%$ typically seen in native MLLMs (such as GLM-4-Voice) and on par with modular MLLMs. Meanwhile, the end-to-end dialogue latency remains within $0.5$ seconds, ensuring a seamless and responsive speech interaction experience.
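The abstract does not spell out how modality load is measured, but load-based expert assignment in a top-k MoE router can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual implementation: the function name, the use of routing selection counts as the load statistic, and all tensor names (`router_logits_text`, `router_logits_speech`) are assumptions.

```python
import torch

def assign_modality_experts(router_logits_text: torch.Tensor,
                            router_logits_speech: torch.Tensor,
                            num_experts: int,
                            top_k: int = 2):
    """Partition MoE experts into text vs. speech experts by routing load (sketch).

    router_logits_*: [num_tokens, num_experts] router scores collected from
    forward passes over single-modality data.
    """
    def expert_load(router_logits: torch.Tensor) -> torch.Tensor:
        # Count how often each expert appears in the router's top-k selection,
        # then normalize to a load distribution over experts.
        topk_idx = router_logits.topk(top_k, dim=-1).indices  # [num_tokens, top_k]
        counts = torch.bincount(topk_idx.flatten(), minlength=num_experts).float()
        return counts / counts.sum()

    load_text = expert_load(router_logits_text)
    load_speech = expert_load(router_logits_speech)

    # Label each expert by whichever modality loads it more heavily.
    speech_experts = (load_speech > load_text).nonzero(as_tuple=True)[0]
    text_experts = (load_speech <= load_text).nonzero(as_tuple=True)[0]
    return text_experts, speech_experts
```

Under such a partition, the single-modality stage would update only the experts assigned to that modality, and the subsequent joint stage would train all experts together.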
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 17176