SynParaSpeech: Automated Synthesis of Paralinguistic Datasets for Speech Generation and Understanding
By: Bingsong Bai, Qihang Lu, Wenbing Yang, and more
Potential Business Impact:
Makes computer voices sound more human.
Paralinguistic sounds, like laughter and sighs, are crucial for synthesizing more realistic and engaging speech. However, existing methods typically depend on proprietary datasets, while publicly available resources often suffer from incomplete speech, inaccurate or missing timestamps, and limited real-world relevance. To address these problems, we propose an automated framework for generating large-scale paralinguistic data and apply it to construct the SynParaSpeech dataset. The dataset comprises 6 paralinguistic categories with 118.75 hours of data and precise timestamps, all derived from natural conversational speech. Our contributions lie in introducing the first automated method for constructing large-scale paralinguistic datasets and releasing the SynParaSpeech corpus, which advances speech generation through more natural paralinguistic synthesis and enhances speech understanding by improving paralinguistic event detection. The dataset and audio samples are available at https://github.com/ShawnPi233/SynParaSpeech.
Similar Papers
NVSpeech: An Integrated and Scalable Pipeline for Human-Like Speech Modeling with Paralinguistic Vocalizations
Sound
Makes computer speech sound more human by adding emotional vocalizations.
A Scalable Pipeline for Enabling Non-Verbal Speech Generation and Understanding
Sound
Helps computers understand and produce non-verbal sounds such as laughter.