Peeking Into The Future For Contextual Biasing

Published: December 19, 2025 | arXiv ID: 2512.17657v1

By: Ramaneswaran Selvakumar, Cindy Tseng, Eesung Kim, and more

Potential Business Impact:

Helps voice assistants understand names better.

Business Areas:
Semantic Search, Internet Services

While end-to-end (E2E) automatic speech recognition (ASR) models excel at general transcription, they struggle to recognize rare or unseen named entities (e.g., contact names, locations), which are critical for downstream applications like virtual assistants. In this paper, we propose a contextual biasing method for attention-based encoder-decoder (AED) models using a list of candidate named entities. Instead of predicting only the next token, we simultaneously predict multiple future tokens, enabling the model to "peek into the future" and score potential candidate entities in the entity list. Moreover, our approach leverages the multi-token prediction logits directly without requiring additional entity encoders or cross-attention layers, significantly reducing architectural complexity. Experiments on LibriSpeech demonstrate that our approach achieves up to 50.34% relative improvement in named entity word error rate compared to the baseline AED model.
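To make the idea concrete, below is a minimal sketch of how multi-token prediction logits could be used to score biasing-list entities and boost the next-token distribution. The paper's exact scoring and fusion rules are not given in the abstract, so the function names (`score_entities`, `bias_next_token_logits`), the mean-log-probability score, and the `bias_weight`/`threshold` values are illustrative assumptions, not the authors' implementation.

```python
import torch

def score_entities(future_logits, entity_token_ids):
    """
    Score candidate named entities against multi-token prediction logits.

    future_logits: tensor [K, V] -- logits for the next K future tokens at the
                   current decoding step (K prediction heads, vocab size V).
    entity_token_ids: list of token-id lists, one per candidate entity
                      from the biasing list.

    Returns one score per entity: the mean log-probability of its first
    min(K, len) tokens under the corresponding prediction heads
    (an assumed scoring rule, not necessarily the paper's).
    """
    log_probs = torch.log_softmax(future_logits, dim=-1)  # [K, V]
    K = log_probs.size(0)
    scores = []
    for tokens in entity_token_ids:
        span = tokens[:K]
        # Gather log P(token_j at future position j) for each token in the span.
        per_token = log_probs[torch.arange(len(span)), torch.tensor(span)]
        scores.append(per_token.mean())
    return torch.stack(scores)


def bias_next_token_logits(next_token_logits, entity_token_ids, entity_scores,
                           bias_weight=2.0, threshold=-3.0):
    """
    Shallow-fusion-style biasing sketch: add a bonus to the next-token logits
    for the first token of every entity whose peek-ahead score clears the
    threshold. The weight and threshold here are placeholder values.
    """
    biased = next_token_logits.clone()
    for tokens, score in zip(entity_token_ids, entity_scores):
        if score > threshold:
            biased[tokens[0]] += bias_weight
    return biased
```

In an actual AED decoder this would run per beam hypothesis at each decoding step, with the biased logits fed back into beam search; because the scores come straight from the multi-token prediction heads, no separate entity encoder or cross-attention module is needed, which is the architectural simplification the abstract highlights.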

Page Count
5 pages

Category
Computer Science:
Computation and Language