Keywords: Large Multimodal Models, GPT-4o
Abstract: Recent advances in language models have been remarkable. GPT-4o, as a new milestone, has enabled real-time conversations with humans, demonstrating near-human natural fluency. Such human-computer interaction requires models that can reason directly over the audio modality and generate output in a streaming fashion. However, this remains beyond the reach of current academic models, which typically depend on extra TTS systems for speech synthesis and thus incur undesirable latency. This paper introduces Mini-Omni, an audio-based end-to-end conversational model capable of real-time speech interaction. To achieve this capability, we propose a text-instructed speech generation method, along with batch-parallel strategies during inference to further boost performance. Our method also retains the original model's language capabilities with minimal degradation, enabling other models to acquire real-time interaction capabilities. We call this training method "Any Model Can Talk". We also introduce the VoiceAssistant-400K dataset to fine-tune models optimized for speech output. To the best of our knowledge, Mini-Omni is the first fully end-to-end, open-source model for real-time speech interaction, offering valuable potential for future research.
Submission Number: 26