Stuttering-Aware Automatic Speech Recognition for Indonesian Language
By: Fadhil Muhammad, Alwin Djuliansah, Adrian Aryaputra Hamzah, and more
Potential Business Impact:
Helps computers understand people who stutter.
Automatic speech recognition systems have achieved remarkable performance on fluent speech but continue to degrade significantly when processing stuttered speech, a limitation that is particularly acute for low-resource languages like Indonesian, where specialized datasets are virtually non-existent. To overcome this scarcity, we propose a data augmentation framework that generates synthetic stuttered audio by injecting repetitions and prolongations into fluent text, using a combination of rule-based transformations and large language models, followed by text-to-speech synthesis. We use this synthetic data to fine-tune a pre-trained Indonesian Whisper model via transfer learning, enabling the model to adapt to dysfluent acoustic patterns without requiring large-scale real-world recordings. Our experiments demonstrate that this targeted synthetic exposure consistently reduces recognition errors on stuttered speech while maintaining performance on fluent segments, validating the utility of synthetic data pipelines for developing more inclusive speech technologies in under-represented languages.
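To make the pipeline concrete, here is a minimal sketch of the text-level dysfluency injection step, assuming simple word-level repetitions and vowel prolongations. The function names, syllable heuristic, and injection probabilities below are illustrative assumptions, not the authors' exact rules or LLM prompts.

```python
import random

VOWELS = "aiueo"

def inject_repetition(word: str, max_repeats: int = 2) -> str:
    """Repeat the opening syllable-like chunk, e.g. 'makan' -> 'ma- ma- makan'."""
    # Take the prefix up to and including the first vowel as a rough syllable.
    for i, ch in enumerate(word):
        if ch.lower() in VOWELS:
            prefix = word[: i + 1]
            break
    else:
        prefix = word[0]
    repeats = random.randint(1, max_repeats)
    return " ".join([prefix + "-"] * repeats + [word])

def inject_prolongation(word: str, stretch: int = 3) -> str:
    """Prolong the first vowel, e.g. 'saya' -> 'saaaya'."""
    for i, ch in enumerate(word):
        if ch.lower() in VOWELS:
            return word[: i + 1] + ch * (stretch - 1) + word[i + 1 :]
    return word

def make_stuttered(sentence: str, p_dysfluency: float = 0.3) -> str:
    """Randomly apply a repetition or prolongation to some words in a fluent sentence."""
    out = []
    for word in sentence.split():
        r = random.random()
        if r < p_dysfluency / 2:
            out.append(inject_repetition(word))
        elif r < p_dysfluency:
            out.append(inject_prolongation(word))
        else:
            out.append(word)
    return " ".join(out)

if __name__ == "__main__":
    # The stuttered text would then be fed to an Indonesian TTS engine
    # to produce the synthetic dysfluent audio described in the abstract.
    print(make_stuttered("saya mau makan nasi goreng di warung dekat rumah"))
```

The resulting stuttered text is then synthesized into audio and used to fine-tune Whisper. A compact sketch of that transfer-learning step with the Hugging Face transformers library follows, assuming a generic Whisper checkpoint and a dataset of (audio, transcript) pairs; the checkpoint name, output directory, and hyperparameters are placeholders rather than the paper's reported settings.

```python
from transformers import (
    WhisperProcessor,
    WhisperForConditionalGeneration,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

checkpoint = "openai/whisper-small"  # placeholder; the paper uses a pre-trained Indonesian Whisper model
processor = WhisperProcessor.from_pretrained(
    checkpoint, language="indonesian", task="transcribe"
)
model = WhisperForConditionalGeneration.from_pretrained(checkpoint)

def preprocess(batch):
    # Turn the raw waveform into log-Mel input features and the transcript into label ids.
    audio = batch["audio"]
    batch["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    batch["labels"] = processor.tokenizer(batch["transcript"]).input_ids
    return batch

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-id-stutter",  # hypothetical output directory
    per_device_train_batch_size=8,
    learning_rate=1e-5,
    num_train_epochs=3,
    predict_with_generate=True,
)

# `train_dataset` and `eval_dataset` are assumed to come from the synthetic
# stutter + TTS pipeline, mapped through `preprocess`; a padding data collator
# for input_features/labels is also needed in practice and omitted here.
# trainer = Seq2SeqTrainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_dataset,
#     eval_dataset=eval_dataset,
#     tokenizer=processor.feature_extractor,
# )
# trainer.train()
```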
Similar Papers
Bridging the Language Gap: Synthetic Voice Diversity via Latent Mixup for Equitable Speech Recognition
Computation and Language
Helps computers understand less common languages better.
Revisiting Rule-Based Stuttering Detection: A Comprehensive Analysis of Interpretable Models for Clinical Applications
Artificial Intelligence
Helps doctors understand stuttering better.
Synthetic Voice Data for Automatic Speech Recognition in African Languages
Computation and Language
Helps computers understand many African languages.