TICL+: A Case Study On Speech In-Context Learning for Children's Speech Recognition
By: Haolong Zheng, Yekaterina Yegorova, Mark Hasegawa-Johnson
Children's speech recognition remains challenging due to substantial acoustic and linguistic variability, limited labeled data, and significant differences from adult speech. Speech foundation models can address these challenges through Speech In-Context Learning (SICL), allowing adaptation to new domains without fine-tuning. However, the effectiveness of SICL depends on how in-context examples are selected. We extend an existing retrieval-based method, Text-Embedding KNN for SICL (TICL), by introducing an acoustic reranking step to create TICL+. This extension prioritizes examples that are both semantically and acoustically aligned with the test input. Experiments on four children's speech corpora show that TICL+ achieves up to a 53.3% relative word error rate reduction over zero-shot performance and 37.6% over baseline TICL, highlighting the value of combining semantic and acoustic information for robust, scalable ASR in children's speech.
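To make the two-stage selection described above concrete, here is a minimal sketch of TICL+-style example retrieval: a text-embedding KNN shortlist followed by an acoustic rerank. The abstract does not specify implementation details, so the embedding models, cosine-similarity scoring, shortlist and final sizes, and all function and variable names below are illustrative assumptions rather than the authors' method.

```python
# Minimal sketch of TICL+-style in-context example selection (assumptions:
# each candidate example has precomputed text and acoustic embeddings, and the
# test utterance has a text embedding of a first-pass hypothesis plus an
# acoustic embedding of its audio).
import numpy as np


def cosine_sim(query: np.ndarray, candidates: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and a matrix of candidates."""
    q = query / (np.linalg.norm(query) + 1e-8)
    c = candidates / (np.linalg.norm(candidates, axis=1, keepdims=True) + 1e-8)
    return c @ q


def select_in_context_examples(
    test_text_emb: np.ndarray,       # text embedding of the test input
    test_acoustic_emb: np.ndarray,   # acoustic embedding of the test audio
    pool_text_embs: np.ndarray,      # (N, d_text) embeddings of candidate transcripts
    pool_acoustic_embs: np.ndarray,  # (N, d_audio) embeddings of candidate audio
    k_text: int = 32,                # size of the semantic (TICL-style) shortlist
    k_final: int = 4,                # examples kept after acoustic reranking (TICL+)
) -> np.ndarray:
    """Return indices of selected examples: text-KNN retrieval, then acoustic rerank."""
    # Stage 1: shortlist semantically similar examples via text-embedding KNN.
    text_scores = cosine_sim(test_text_emb, pool_text_embs)
    shortlist = np.argsort(-text_scores)[:k_text]
    # Stage 2: rerank the shortlist by acoustic similarity to the test utterance.
    acoustic_scores = cosine_sim(test_acoustic_emb, pool_acoustic_embs[shortlist])
    return shortlist[np.argsort(-acoustic_scores)[:k_final]]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    idx = select_in_context_examples(
        test_text_emb=rng.normal(size=384),
        test_acoustic_emb=rng.normal(size=256),
        pool_text_embs=rng.normal(size=(1000, 384)),
        pool_acoustic_embs=rng.normal(size=(1000, 256)),
    )
    print("Selected in-context example indices:", idx)
```

The selected examples would then be placed in the prompt of the speech foundation model before the test utterance; the split into a semantic shortlist followed by an acoustic rerank reflects the abstract's goal of choosing examples that are both semantically and acoustically aligned with the test input.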