Beyond Transcription: Mechanistic Interpretability in ASR
By: Neta Glazer, Yael Segal-Feldman, Hilit Segev, and more
Potential Business Impact:
Helps computers understand speech better by letting researchers see inside the model.
Interpretability methods have recently gained significant attention, particularly in the context of large language models, where they enable insights into linguistic representations, error detection, and model behaviors such as hallucinations and repetitions. However, these techniques remain underexplored in automatic speech recognition (ASR), despite their potential to advance both the performance and interpretability of ASR systems. In this work, we adapt and systematically apply established interpretability methods, such as the logit lens, linear probing, and activation patching, to examine how acoustic and semantic information evolves across layers in ASR systems. Our experiments reveal previously unknown internal dynamics, including specific encoder-decoder interactions responsible for repetition hallucinations and semantic biases encoded deep within acoustic representations. These insights demonstrate the benefits of extending and applying interpretability techniques to speech recognition, opening promising directions for future research on improving model transparency and robustness.
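To make the logit-lens idea concrete, the sketch below applies it to a Whisper-style encoder-decoder ASR model loaded through Hugging Face transformers. This is an illustrative assumption rather than the paper's exact setup: the model name ("openai/whisper-tiny"), the silent dummy audio, and the decision to inspect every decoder layer are placeholders chosen for brevity. The core move is the same as in language-model work: project each intermediate decoder hidden state through the model's final layer norm and output projection, and see which token the model would emit at each depth.

# Minimal logit-lens sketch for an encoder-decoder ASR model (Whisper-style).
# Assumptions: Hugging Face transformers + torch; "openai/whisper-tiny" is a placeholder model.
import numpy as np
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
model.eval()

# One second of silence stands in for real audio; swap in a loaded 16 kHz waveform.
waveform = np.zeros(16000, dtype=np.float32)
features = processor(waveform, sampling_rate=16000, return_tensors="pt").input_features

# Start decoding from the decoder start token and collect hidden states from every decoder layer.
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
with torch.no_grad():
    outputs = model(
        input_features=features,
        decoder_input_ids=decoder_input_ids,
        output_hidden_states=True,
    )

# Logit lens: push each intermediate decoder state through the final layer norm
# and the output projection, then read off the most likely token per layer.
layer_norm = model.model.decoder.layer_norm
unembed = model.proj_out
for layer_idx, hidden in enumerate(outputs.decoder_hidden_states):
    logits = unembed(layer_norm(hidden[:, -1, :]))
    token_id = int(logits.argmax(dim=-1))
    token = processor.tokenizer.decode([token_id])
    print(f"decoder layer {layer_idx:2d} -> predicted token: {token!r}")

Linear probing and activation patching follow the same access pattern: probing fits a small classifier on the cached hidden states, while patching overwrites an activation from one run with the corresponding activation from another and measures how the transcription changes.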
Similar Papers
Unsupervised decoding of encoded reasoning using language model interpretability
Artificial Intelligence
Uncovers how AI thinks, even when hidden.
Hallucination Benchmark for Speech Foundation Models
Computation and Language
Finds fake words in computer speech.
FunAudio-ASR Technical Report
Computation and Language
Makes talking computers understand messy, noisy speech.