Categorize Early, Integrate Late: Divergent Processing Strategies in Automatic Speech Recognition
By: Nathan Roll, Pranav Bhalerao, Martijn Bartelds, and more
Potential Business Impact:
Helps computers understand speech faster or with more context.
In speech language modeling, two architectures dominate the frontier: the Transformer and the Conformer. However, it remains unknown whether their comparable performance stems from convergent processing strategies or from distinct architectural inductive biases. We introduce Architectural Fingerprinting, a probing framework that isolates the effect of architecture on representation, and apply it to a controlled suite of 24 pre-trained encoders (39M-3.3B parameters). Our analysis reveals divergent hierarchies: Conformers implement a "Categorize Early" strategy, resolving phoneme categories 29% earlier in relative depth and speaker gender by 16% of network depth. In contrast, Transformers "Integrate Late," deferring phoneme, accent, and duration encoding to deep layers (49-57% of depth). These fingerprints suggest design heuristics: Conformers' front-loaded categorization may benefit low-latency streaming, while Transformers' deep integration may favor tasks requiring rich context and cross-utterance normalization.
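The abstract does not spell out the probing procedure, but the core idea — fitting a layer-wise linear probe and reporting the earliest relative depth at which a property becomes decodable — can be sketched. This is a minimal illustration with synthetic representations and a least-squares linear probe; the function name, threshold, and toy data are assumptions, not the paper's released code.

```python
import numpy as np

def probe_depth(layer_reps, labels, threshold=0.9):
    """Return the earliest layer, as a fraction of total depth, at which
    a linear probe classifies `labels` with accuracy >= `threshold`.
    Probe: least-squares regression onto one-hot targets (a common
    linear-probe choice); accuracy is measured on the fitted data.
    """
    n_layers = len(layer_reps)
    classes = np.unique(labels)
    Y = (labels[:, None] == classes[None, :]).astype(float)  # one-hot targets
    for i, X in enumerate(layer_reps):
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
        W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)     # fit linear probe
        pred = classes[np.argmax(Xb @ W, axis=1)]
        if (pred == labels).mean() >= threshold:
            return (i + 1) / n_layers
    return 1.0  # property never reaches threshold

# Toy demo: a binary property (e.g. speaker gender) becomes linearly
# decodable only from layer 3 of a hypothetical 10-layer encoder.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
layer_reps = []
for layer in range(10):
    X = rng.normal(size=(200, 16))
    if layer >= 2:                   # inject a class-dependent direction
        X[:, 0] += 4.0 * labels
    layer_reps.append(X)

print(probe_depth(layer_reps, labels))  # earliest decodable depth fraction
```

Comparing this depth fraction between two architectures trained under matched conditions is what lets a statement like "29% earlier in depth" be made independently of parameter count.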
Similar Papers
Training-Free Spectral Fingerprints of Voice Processing in Transformers
Computation and Language
Shows how AI learns languages differently.
Automatic Speech Recognition in the Modern Era: Architectures, Training, and Evaluation
Audio and Speech Processing
Makes computers understand spoken words better.
Early Attentive Sparsification Accelerates Neural Speech Transcription
Machine Learning (CS)
Speeds up talking-to-text by making audio simpler.