TICL: Text-Embedding KNN For Speech In-Context Learning Unlocks Speech Recognition Abilities of Large Multimodal Models
By: Haolong Zheng, Yekaterina Yegorova, Mark Hasegawa-Johnson
Potential Business Impact:
Helps computers understand speech better, even speech with accents.
Speech foundation models have recently demonstrated the ability to perform Speech In-Context Learning (SICL). Selecting effective in-context examples is crucial for SICL performance, yet selection methodologies remain underexplored. In this work, we propose Text-Embedding KNN for SICL (TICL), a simple pipeline that uses semantic context to enhance the speech recognition ability of off-the-shelf large multimodal models without fine-tuning. Across challenging automatic speech recognition tasks, including accented English, multilingual speech, and children's speech, our method enables models to surpass zero-shot performance with up to an 84.7% relative reduction in word error rate (WER). We conduct ablation studies to show the robustness and efficiency of our method.
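The pipeline described in the abstract reduces to a simple retrieval step: embed the transcripts of a pool of labeled (audio, text) examples, embed a text query for the test utterance, and take the k nearest neighbors as in-context examples for the multimodal model. The sketch below illustrates that idea only; the choice of embedding model (sentence-transformers' all-MiniLM-L6-v2), the use of a first-pass hypothesis as the query, and helper names like build_index and knn_select are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of text-embedding KNN example selection for SICL.
# Assumptions (not from the paper): transcripts are embedded with
# sentence-transformers' all-MiniLM-L6-v2; the query is a first-pass
# transcript of the test utterance; the top-k pool examples are then
# prepended to the prompt as in-context (audio, text) pairs.

import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def build_index(pool_transcripts):
    """Embed and L2-normalize every transcript in the example pool."""
    emb = embedder.encode(pool_transcripts, convert_to_numpy=True)
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

def knn_select(query_text, pool_embeddings, k=4):
    """Return indices of the k pool examples most similar to the query."""
    q = embedder.encode([query_text], convert_to_numpy=True)[0]
    q = q / np.linalg.norm(q)
    sims = pool_embeddings @ q    # cosine similarity on unit vectors
    return np.argsort(-sims)[:k]  # top-k, most similar first

# Usage: pick in-context examples for one test utterance.
pool = [
    ("ex1.wav", "turn on the kitchen lights"),
    ("ex2.wav", "what is the weather in Chicago"),
    ("ex3.wav", "play some jazz music"),
]
index = build_index([text for _, text in pool])
hypothesis = "whats the weather in chicago today"  # first-pass transcript
for i in knn_select(hypothesis, index, k=2):
    audio, text = pool[i]
    print(f"in-context example: {audio} -> {text!r}")
```

Because selection happens purely in text-embedding space, this step needs no gradient updates to the multimodal model, which is consistent with the abstract's claim of improving recognition without fine-tuning.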
Similar Papers
TICL+: A Case Study On Speech In-Context Learning for Children's Speech Recognition
Audio and Speech Processing
Helps computers understand kids' speech better.
Efficient Text Classification with Conformal In-Context Learning
Computation and Language
Makes AI smarter and faster for reading text.
Unlocking In-Context Learning for Natural Datasets Beyond Language Modelling
Computation and Language
Teaches computers to learn new things from examples.