Bridging ASR and LLMs for Dysarthric Speech Recognition: Benchmarking Self-Supervised and Generative Approaches
By: Ahmed Aboeitta, Ahmed Sharshar, Youssef Nafea and more
Potential Business Impact:
Helps computers understand speech with unclear pronunciation.
Dysarthric speech poses significant challenges for Automatic Speech Recognition (ASR) due to phoneme distortions and high variability. While self-supervised ASR models like Wav2Vec, HuBERT, and Whisper have shown promise, their effectiveness on dysarthric speech remains unclear. This study systematically benchmarks these models with different decoding strategies, including CTC, seq2seq, and LLM-enhanced decoding (BART, GPT-2, Vicuna). Our contributions include (1) benchmarking ASR architectures for dysarthric speech, (2) introducing LLM-based decoding to improve intelligibility, (3) analyzing generalization across datasets, and (4) providing insights into recognition errors across severity levels. Findings highlight that LLM-enhanced decoding improves dysarthric ASR by leveraging linguistic constraints for phoneme restoration and grammatical correction.
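As a rough illustration of the LLM-enhanced decoding idea (not the paper's released code), the sketch below greedily CTC-decodes audio with a Wav2Vec2 checkpoint and then rescores a list of candidate transcripts with GPT-2 so that linguistically plausible hypotheses win. The checkpoint names, the greedy decoder, the `rescore` helper, and the weight `alpha` are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch, assuming Hugging Face `transformers` and 16 kHz mono audio.
# Step 1: acoustic CTC decoding; Step 2: GPT-2 fluency rescoring of n-best hypotheses.
import torch
from transformers import (
    Wav2Vec2Processor, Wav2Vec2ForCTC,
    GPT2LMHeadModel, GPT2TokenizerFast,
)

asr_proc = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
asr_model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
lm_tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")

def ctc_greedy_transcribe(waveform: torch.Tensor, sr: int = 16000) -> str:
    """Greedy CTC decode of a mono waveform tensor."""
    inputs = asr_proc(waveform, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        logits = asr_model(inputs.input_values).logits
    ids = torch.argmax(logits, dim=-1)
    return asr_proc.batch_decode(ids)[0].lower()

def lm_log_prob(text: str) -> float:
    """Length-normalized GPT-2 log-probability, used as a fluency score."""
    ids = lm_tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # labels=input_ids yields the mean token cross-entropy (negative log-prob).
        loss = lm(ids, labels=ids).loss
    return -loss.item()

def rescore(hypotheses: list[tuple[str, float]], alpha: float = 0.5) -> str:
    """Pick the hypothesis maximizing acoustic score + alpha * LM fluency score.

    `hypotheses` is a list of (transcript, acoustic log-score) pairs, e.g. from
    a beam-search decoder; `alpha` trades off acoustic vs. linguistic evidence.
    """
    return max(hypotheses, key=lambda h: h[1] + alpha * lm_log_prob(h[0]))[0]
```

In this sketch, the LLM acts as an external linguistic prior: distorted phoneme sequences that produce ungrammatical candidates are penalized at rescoring time, which mirrors the role LLM-enhanced decoding plays in the study.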
Similar Papers
Audio-Conditioned Diffusion LLMs for ASR and Deliberation Processing
Audio and Speech Processing
Makes computers understand spoken words better.
FunAudio-ASR Technical Report
Computation and Language
Makes talking computers understand messy, noisy speech.