FLM-Audio: Natural Monologues Improves Native Full-Duplex Chatbots via Dual Training
By: Yiqun Yao, Xiang Li, Xin Jiang, and more
Potential Business Impact:
Lets computers talk and listen at once.
Full-duplex dialog models are designed to listen and speak simultaneously, responding rapidly to fast-changing user input. Among existing approaches, native full-duplex models merge different channels (e.g., listening and speaking) within a single time step, overcoming the high response latency inherent to time-division multiplexing (TDM) alternatives. Yet a key challenge remains: aligning textual monologues with audio streams that operate at different bitrates. The prevailing solution relies on word-level alignment, but this can degrade the language ability of large pre-trained models. Moreover, it requires highly accurate timestamps for every token, which introduces cascading errors and increases pre-processing costs. In this paper, we propose textual monologues as continuous token sequences, namely "natural" monologues, which mimic human-like cognitive behavior in dialogs. For temporal alignment, we alternate the position of the natural monologue - leading or trailing the audio - across different training stages. This "dual" training paradigm proves highly effective in building FLM-Audio, our 7B spoken dialog model, which demonstrates superior responsiveness, duplexity, and chat experience, as confirmed by experimental results.
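The abstract does not specify implementation details, but the "dual" arrangement it describes can be pictured with a minimal sketch: the same dialog is serialized twice, with the monologue text either leading or trailing the corresponding audio chunk, so alignment only happens at chunk granularity rather than per-word timestamps. All names here (DialogChunk, serialize, monologue_first) are hypothetical illustrations, not the authors' API.

```python
# Hypothetical sketch of dual training data layout, assuming chunk-level
# interleaving of monologue text tokens and discretized audio tokens.

from dataclasses import dataclass


@dataclass
class DialogChunk:
    audio_tokens: list[int]  # audio-codec tokens for one chunk of speech
    text_tokens: list[int]   # natural-monologue text tokens for the same span


def serialize(chunks: list[DialogChunk], monologue_first: bool) -> list[int]:
    """Flatten a dialog into one token stream for a training stage.

    monologue_first=True  -> text leads the audio it describes
    monologue_first=False -> text trails the audio it describes
    """
    stream: list[int] = []
    for chunk in chunks:
        if monologue_first:
            stream += chunk.text_tokens + chunk.audio_tokens
        else:
            stream += chunk.audio_tokens + chunk.text_tokens
    return stream


# One training stage uses monologue-leading sequences, another uses
# monologue-trailing ones; no per-token timestamps are needed because
# text and audio are only aligned at chunk boundaries.
chunks = [DialogChunk(audio_tokens=[101, 102], text_tokens=[7, 8])]
stage_a = serialize(chunks, monologue_first=True)   # [7, 8, 101, 102]
stage_b = serialize(chunks, monologue_first=False)  # [101, 102, 7, 8]
```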
Similar Papers
DialoSpeech: Dual-Speaker Dialogue Generation with LLM and Flow Matching
Audio and Speech Processing
Makes computer voices have real conversations.
From Turn-Taking to Synchronous Dialogue: A Survey of Full-Duplex Spoken Language Models
Computation and Language
Lets AI talk and listen at the same time.