Chain-of-Thought Training for Open E2E Spoken Dialogue Systems

Published: May 31, 2025 | arXiv ID: 2506.00722v1

By: Siddhant Arora, Jinchuan Tian, Hayato Futami, and more

Potential Business Impact:

Makes talking computers understand and respond better.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Unlike traditional cascaded pipelines, end-to-end (E2E) spoken dialogue systems preserve full differentiability and capture non-phonemic information, making them well-suited for modeling spoken interactions. However, existing E2E approaches often require large-scale training data and generate responses lacking semantic coherence. We propose a simple yet effective strategy leveraging a chain-of-thought (CoT) formulation, ensuring that training on conversational data remains closely aligned with the multimodal language model (LM)'s pre-training on speech recognition (ASR), text-to-speech synthesis (TTS), and text LM tasks. Our method achieves over 1.5 ROUGE-1 improvement over the baseline, successfully training spoken dialogue systems on publicly available human-human conversation datasets, while being compute-efficient enough to train on just 300 hours of public human-human conversation data, such as the Switchboard corpus. We will publicly release our models and training code.
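To make the CoT formulation concrete, the sketch below shows one plausible way to lay out a single dialogue turn as a training sequence: the model first transcribes the user's speech (mirroring ASR pre-training), then composes a text reply (text LM), then renders it as speech tokens (TTS). This is not the authors' released code; the tag names, helper function, and the assumption of discrete speech tokens with paired transcripts are illustrative.

```python
# Minimal sketch (not the paper's implementation) of building a chain-of-thought
# training sequence for an E2E spoken dialogue model. Assumes both turns are
# already encoded as discrete speech tokens and transcripts are available;
# all tag names below are hypothetical.

from typing import Dict, List


def build_cot_sequence(
    user_speech_tokens: List[int],
    user_transcript_tokens: List[int],
    response_text_tokens: List[int],
    response_speech_tokens: List[int],
    tags: Dict[str, int],
) -> List[int]:
    """Concatenate one dialogue turn into a single CoT-style target sequence.

    Keeping the intermediate text steps in the sequence keeps fine-tuning on
    conversational data close to the multimodal LM's ASR / text-LM / TTS
    pre-training tasks.
    """
    return (
        [tags["<user_speech>"]] + user_speech_tokens
        + [tags["<asr>"]] + user_transcript_tokens            # step 1: transcribe user speech (ASR)
        + [tags["<text_response>"]] + response_text_tokens    # step 2: compose reply in text (text LM)
        + [tags["<speech_response>"]] + response_speech_tokens  # step 3: synthesize reply speech (TTS)
        + [tags["<eos>"]]
    )
```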

Country of Origin
🇺🇸 United States

Page Count
5 pages

Category
Computer Science:
Computation and Language