Think, Verbalize, then Speak: Bridging Complex Thoughts and Comprehensible Speech

Published: September 19, 2025 | arXiv ID: 2509.16028v1

By: Sang Hoon Woo, Sehun Lee, Kang-wook Kim and more

Potential Business Impact:

Lets voice assistants keep the full reasoning power of LLMs while delivering answers as natural, concise speech.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Spoken dialogue systems increasingly employ large language models (LLMs) to leverage their advanced reasoning capabilities. However, directly applying LLMs to spoken communication often yields suboptimal results due to mismatches between the optimal forms of textual and verbal delivery. While existing approaches adapt LLMs to produce speech-friendly outputs, their impact on reasoning performance remains underexplored. In this work, we propose Think-Verbalize-Speak, a framework that decouples reasoning from spoken delivery to preserve the full reasoning capacity of LLMs. Central to our method is verbalizing, an intermediate step that translates thoughts into natural, speech-ready text. We also introduce ReVerT, a latency-efficient verbalizer based on incremental and asynchronous summarization. Experiments across multiple benchmarks show that our method enhances speech naturalness and conciseness with minimal impact on reasoning. The project page with the dataset and the source code is available at https://yhytoto12.github.io/TVS-ReVerT
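To make the decoupling concrete, below is a minimal Python sketch of the three-stage pipeline the abstract describes: a reasoner streams its thoughts, a ReVerT-style verbalizer incrementally rewrites them into speech-ready sentences as they arrive, and a speech stage consumes them asynchronously so audio can start before reasoning finishes. All names here (`think`, `verbalize`, `speak`, `thought_stream`) are illustrative placeholders assumed for this sketch, not the authors' API.

```python
# Minimal sketch of the Think-Verbalize-Speak idea.
# Hypothetical stand-ins for the paper's components; not the authors' code.
import asyncio


async def think(question: str) -> str:
    """Stage 1: full chain-of-thought reasoning as text.
    Placeholder: a real system would call an LLM here."""
    return f"Step-by-step reasoning about: {question}"


async def thought_stream(question: str):
    """Simulate the reasoner emitting thought chunks incrementally."""
    for chunk in (await think(question)).split("."):
        if chunk:
            await asyncio.sleep(0)  # yield control, mimicking streaming
            yield chunk


async def verbalize(thought_chunks):
    """Stage 2 (ReVerT-style idea): incrementally summarize reasoning
    chunks into short, speech-ready sentences as they arrive, rather
    than waiting for the complete chain of thought."""
    async for chunk in thought_chunks:
        # In the paper this is a trained verbalizer; here we pass through.
        yield f"In plain terms: {chunk.strip()}"


async def speak(sentence: str) -> None:
    """Stage 3: hand each verbalized sentence to a TTS engine.
    Placeholder: print instead of synthesizing audio."""
    print(f"[TTS] {sentence}")


async def main() -> None:
    question = "Why does decoupling reasoning from delivery help"
    async for sentence in verbalize(thought_stream(question)):
        await speak(sentence)  # speech begins before reasoning completes


asyncio.run(main())
```

The key design choice this illustrates is that verbalization runs concurrently with reasoning, which is how an incremental, asynchronous summarizer can reduce spoken-response latency without truncating the underlying chain of thought.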

Country of Origin
🇰🇷 Korea, Republic of

Page Count
19 pages

Category
Computer Science: Computation and Language