Large Language Models based ASR Error Correction for Child Conversations

Published: May 22, 2025 | arXiv ID: 2505.16212v2

By: Anfeng Xu, Tiantian Feng, So Hyun Kim, and more

Potential Business Impact:

Helps computers understand children's speech more accurately.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Automatic Speech Recognition (ASR) has recently shown remarkable progress, but accurately transcribing children's speech remains a significant challenge. Recent developments in Large Language Models (LLMs) have shown promise in improving ASR transcriptions. However, their applications to child speech, including conversational scenarios, are underexplored. In this study, we explore the use of LLMs for correcting ASR errors in conversational child speech. We demonstrate the promise and challenges of LLMs through experiments on two children's conversational speech datasets, using both zero-shot and fine-tuned ASR outputs. We find that while LLMs are helpful for correcting zero-shot ASR outputs and fine-tuned CTC-based ASR outputs, it remains challenging for LLMs to improve ASR performance when incorporating contextual information or when using fine-tuned autoregressive ASR (e.g., Whisper) outputs.
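The general setup the abstract describes can be sketched as follows: an ASR hypothesis (optionally with preceding conversational context) is wrapped in an instruction prompt, and an LLM is asked to return a corrected transcript. This is a minimal illustration, not the paper's exact prompt or pipeline; the function name and prompt wording are assumptions.

```python
from typing import Optional


def build_correction_prompt(asr_hypothesis: str, context: Optional[str] = None) -> str:
    """Build a prompt asking an LLM to fix likely ASR errors.

    `context` optionally carries preceding conversation turns; note that the
    paper reports that adding contextual information did not reliably improve
    ASR performance.
    """
    lines = [
        "The following is an automatic speech recognition transcript of a",
        "child's conversational speech. It may contain recognition errors.",
        "Return only the corrected transcript.",
    ]
    if context:
        lines.append(f"Preceding turns: {context}")
    lines.append(f"Transcript: {asr_hypothesis}")
    return "\n".join(lines)


# The prompt would then be sent to an LLM (zero-shot or fine-tuned); the
# model's reply is taken as the corrected transcript.
prompt = build_correction_prompt("i want to pway wif the bwocks")
```

In practice the error-correction model sees only the (possibly noisy) ASR hypothesis, so its corrections are limited by how recoverable the original utterance is from that text, which is one reason outputs from strong fine-tuned autoregressive ASR models are harder to improve.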

Country of Origin
🇺🇸 United States

Page Count
5 pages

Category
Computer Science:
Computation and Language