Fine-Tuning ASR for Stuttered Speech: Personalized vs. Generalized Approaches
By: Dena Mujtaba, Nihar Mahapatra
Potential Business Impact:
Helps voice assistants understand people who stutter.
Stuttering, characterized by involuntary disfluencies such as blocks, prolongations, and repetitions, is often misinterpreted by automatic speech recognition (ASR) systems, resulting in elevated word error rates and making voice-driven technologies inaccessible to people who stutter. The variability of disfluencies across speakers and contexts further complicates ASR training, a difficulty compounded by the scarcity of annotated stuttered speech data. In this paper, we investigate fine-tuning ASR models for stuttered speech, comparing generalized models trained across multiple speakers with personalized models tailored to an individual's speech characteristics. Using a diverse range of voice-AI scenarios, including virtual assistants and video interviews, we evaluate how personalization affects transcription accuracy. Our findings show that personalized ASR models significantly reduce word error rates, especially on spontaneous speech, highlighting the potential of tailored models for more inclusive voice technologies.
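To make the generalized-vs-personalized comparison concrete, here is a minimal sketch of the experimental pattern the abstract describes: fine-tune one ASR model on pooled data from all speakers, fine-tune a separate copy per speaker, and compare word error rates on held-out test utterances. The paper does not specify its model or training setup, so the wav2vec 2.0 checkpoint, the `train_data`/`test_data` dictionaries (speaker ID mapped to lists of `(waveform, transcript)` pairs), and all hyperparameters below are assumptions for illustration only.

```python
# Hedged sketch of the generalized vs. personalized ASR comparison.
# Model, dataset layout, and hyperparameters are assumptions, not the
# authors' released code. Requires: torch, transformers, jiwer.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import jiwer

PRETRAINED = "facebook/wav2vec2-base-960h"  # assumed base ASR checkpoint

def fine_tune(model, processor, samples, epochs=3, lr=1e-5):
    """Fine-tune a CTC ASR model on (waveform, transcript) pairs."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for waveform, transcript in samples:
            inputs = processor(waveform, sampling_rate=16_000,
                               return_tensors="pt")
            # Uppercase to match this checkpoint's character vocabulary.
            labels = processor(text=transcript.upper(),
                               return_tensors="pt").input_ids
            loss = model(inputs.input_values, labels=labels).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model

def transcribe(model, processor, waveform):
    """Greedy CTC decoding of a single 16 kHz waveform."""
    model.eval()
    with torch.no_grad():
        inputs = processor(waveform, sampling_rate=16_000,
                           return_tensors="pt")
        logits = model(inputs.input_values).logits
    ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(ids)[0]

def compare(train_data, test_data):
    """Generalized model: fine-tuned once on pooled data from all speakers.
    Personalized model: a fresh copy fine-tuned on each speaker's own data."""
    processor = Wav2Vec2Processor.from_pretrained(PRETRAINED)

    pooled = [pair for pairs in train_data.values() for pair in pairs]
    generalized = fine_tune(
        Wav2Vec2ForCTC.from_pretrained(PRETRAINED), processor, pooled)

    for speaker, pairs in test_data.items():
        personalized = fine_tune(
            Wav2Vec2ForCTC.from_pretrained(PRETRAINED), processor,
            train_data[speaker])
        refs = [t.upper() for _, t in pairs]
        gen_hyp = [transcribe(generalized, processor, w) for w, _ in pairs]
        per_hyp = [transcribe(personalized, processor, w) for w, _ in pairs]
        print(f"{speaker}: generalized WER={jiwer.wer(refs, gen_hyp):.3f}, "
              f"personalized WER={jiwer.wer(refs, per_hyp):.3f}")
```

The design choice worth noting is that personalization here means full per-speaker fine-tuning from the same base checkpoint; lighter-weight alternatives (e.g., low-rank adapters, as in the related work listed below) would follow the same evaluation loop with a cheaper `fine_tune` step.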
Similar Papers
Improved Dysarthric Speech to Text Conversion via TTS Personalization
Sound
Helps people with speech problems talk to computers.
Variational Low-Rank Adaptation for Personalized Impaired Speech Recognition
Audio and Speech Processing
Helps computers understand impaired speech.
Data-Efficient ASR Personalization for Non-Normative Speech Using an Uncertainty-Based Phoneme Difficulty Score for Guided Sampling
Audio and Speech Processing
Helps computers understand speech from people with disabilities.