Segmental Attention Decoding With Long Form Acoustic Encodings
By: Pawel Swietojanski, Xinwei Li, Mingbin Xu, and more
Potential Business Impact:
Lets AI understand long speech without breaking it up.
We address the fundamental incompatibility of attention-based encoder-decoder (AED) models with long-form acoustic encodings. AED models trained on segmented utterances learn to encode absolute frame positions by exploiting the limited acoustic context beyond segment boundaries, but they fail to generalize when decoding long-form segments where these cues vanish. The model loses the ability to order acoustic encodings because the keys and values in cross-attention are permutation invariant. We propose four modifications: (1) injecting explicit absolute positional encodings into cross-attention for each decoded segment, (2) long-form training with extended acoustic context to eliminate implicit absolute position encoding, (3) segment concatenation to cover the diverse segmentations needed during training, and (4) semantic segmentation to align AED-decoded segments with training segments. We show these modifications close the accuracy gap between continuous and segmented acoustic encodings, enabling auto-regressive use of the attention decoder.
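To make modification (1) concrete, below is a minimal sketch, not the authors' implementation, of injecting explicit absolute positional encodings into the cross-attention keys for the acoustic frames of the segment currently being decoded. The module and argument names (`SegmentalCrossAttention`, `seg_start`, `seg_end`) are illustrative assumptions; only the general idea, restoring an ordering cue that permutation-invariant keys and values otherwise lack, follows the abstract.

```python
# Sketch of idea (1): per-segment absolute positional encodings added to
# cross-attention keys over a long-form acoustic encoding. Names are assumptions.
import math
import torch
import torch.nn as nn


def sinusoidal_pe(positions: torch.Tensor, d_model: int) -> torch.Tensor:
    """Standard sinusoidal encodings for the given absolute frame positions."""
    device = positions.device
    inv_freq = torch.exp(
        torch.arange(0, d_model, 2, dtype=torch.float32, device=device)
        * (-math.log(10000.0) / d_model)
    )
    angles = positions.float().unsqueeze(-1) * inv_freq     # (T, d_model/2)
    pe = torch.zeros(positions.size(0), d_model, device=device)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe


class SegmentalCrossAttention(nn.Module):
    """Cross-attention whose keys carry explicit absolute frame positions.

    Without the added encodings, keys/values are permutation invariant, so a
    decoder trained on short segments cannot order the frames of a long-form
    encoding; tagging each decoded segment's frames with their absolute
    positions restores that ordering cue.
    """

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.d_model = d_model

    def forward(self, queries, acoustic_enc, seg_start, seg_end):
        # acoustic_enc: (B, T_long, d_model) long-form encoder output.
        # Attend only to the current segment's frames, but tag them with their
        # absolute positions within the long-form recording.
        positions = torch.arange(seg_start, seg_end, device=acoustic_enc.device)
        pe = sinusoidal_pe(positions, self.d_model).to(acoustic_enc.dtype)
        seg = acoustic_enc[:, seg_start:seg_end]             # (B, T_seg, d_model)
        keys = seg + pe.unsqueeze(0)                         # position-aware keys
        out, _ = self.attn(queries, keys, seg)               # values left unchanged
        return out
```

In this sketch only the keys receive the positional offsets, so the decoder can locate frames within the long-form encoding while the values it reads out remain the plain acoustic encodings; whether positions are also added to the values is a design choice the abstract does not specify.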
Similar Papers
Performance Modeling for Correlation-based Neural Decoding of Auditory Attention to Speech
Signal Processing
Helps hearing aids focus on who you're listening to.
Listening Between the Frames: Bridging Temporal Gaps in Large Audio-Language Models
Sound
Helps computers understand *when* things happen in audio.
Structure-Aware Decoding Mechanisms for Complex Entity Extraction with Large-Scale Language Models
Computation and Language
Helps computers understand complex sentences better.