Unifying Model and Layer Fusion for Speech Foundation Models
By: Yi-Jen Shih, David Harwath
Potential Business Impact:
Combines the strengths of multiple speech AI models.
Speech Foundation Models have gained significant attention recently. Prior work has shown that fusing representations from multiple layers of the same model, or fusing multiple models, can improve performance on downstream tasks. We unify these two fusion strategies by proposing an interface module that enables fusion across multiple upstream speech models while integrating information across their layers. We conduct extensive experiments on self-supervised and supervised models across various speech tasks, including ASR and paralinguistic analysis, and demonstrate that our method outperforms prior fusion approaches. We further analyze its scalability with respect to model size and model count, highlighting the importance of selecting appropriate upstream models. Our results show that the proposed interface provides an additional performance boost given a suitable selection of upstream models, making it a promising approach for utilizing Speech Foundation Models.
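The abstract does not spell out the interface's internals, so the sketch below is only one plausible reading that combines the two prior strategies it cites: a learnable softmax-weighted sum over each upstream model's layers (layer fusion), followed by a learnable weighted sum across the models themselves (model fusion). The class name `FusionInterface`, its parameters, and the assumption that all upstream models emit frame sequences of equal length are illustrative choices, not the paper's actual design.

```python
import torch
import torch.nn as nn


class FusionInterface(nn.Module):
    """Hypothetical fusion interface (a sketch, not the paper's exact
    architecture): weighted sum over each model's layers, then a
    weighted sum across models."""

    def __init__(self, layers_per_model, hidden_dims, out_dim):
        super().__init__()
        # One learnable scalar per layer of each model (layer fusion).
        self.layer_weights = nn.ParameterList(
            [nn.Parameter(torch.zeros(n)) for n in layers_per_model]
        )
        # Map each model's hidden size to a shared output dimension.
        self.projections = nn.ModuleList(
            [nn.Linear(d, out_dim) for d in hidden_dims]
        )
        # One learnable scalar per model (model fusion).
        self.model_weights = nn.Parameter(torch.zeros(len(hidden_dims)))

    def forward(self, hidden_states):
        # hidden_states: list over models; each entry is a list of
        # per-layer tensors shaped (batch, time, hidden_dim). Assumes
        # all models produce frame sequences of the same length.
        fused = []
        for states, w, proj in zip(hidden_states, self.layer_weights,
                                   self.projections):
            stacked = torch.stack(states)                 # (L, B, T, D)
            alpha = torch.softmax(w, dim=0)               # layer weights
            layer_sum = (alpha[:, None, None, None] * stacked).sum(dim=0)
            fused.append(proj(layer_sum))                 # (B, T, out_dim)
        beta = torch.softmax(self.model_weights, dim=0)   # model weights
        return sum(b * f for b, f in zip(beta, fused))    # (B, T, out_dim)


# Toy usage: two hypothetical upstream models with 4 and 6 layers.
iface = FusionInterface(layers_per_model=[4, 6],
                        hidden_dims=[768, 1024], out_dim=256)
feats_a = [torch.randn(2, 50, 768) for _ in range(4)]
feats_b = [torch.randn(2, 50, 1024) for _ in range(6)]
out = iface([feats_a, feats_b])   # -> shape (2, 50, 256)
```

Softmax-normalized scalar weights mirror the SUPERB-style layer-wise weighted sum common in prior layer-fusion work, and they keep the learned contributions of each layer and model directly inspectable.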
Similar Papers
UniVoice: Unifying Autoregressive ASR and Flow-Matching based TTS with Large Language Models
Audio and Speech Processing
Lets computers understand and speak like people.
Layer-wise Analysis for Quality of Multilingual Synthesized Speech
Audio and Speech Processing
Analyzes what makes computer voices sound natural across languages.
What do Speech Foundation Models Learn? Analysis and Applications
Computation and Language
Helps computers understand spoken words better.