Incorporating Linguistic Constraints from External Knowledge Source for Audio-Visual Target Speech Extraction
By: Wenxuan Wu, Shuai Wang, Xixin Wu, and more
Potential Business Impact:
Helps computers hear one voice in noisy rooms.
Audio-visual target speaker extraction (AV-TSE) models primarily rely on the target speaker's visual cues to isolate their voice from other speakers. Humans, however, also leverage linguistic knowledge, such as syntax and semantics, to support speech perception. Inspired by this, we explore pre-trained speech-language models (PSLMs) and pre-trained language models (PLMs) as auxiliary knowledge sources for AV-TSE. Specifically, we propose incorporating linguistic constraints from PSLMs or PLMs into the AV-TSE model as additional supervision signals. Without introducing any extra computational cost during inference, the proposed approach consistently improves speech quality and intelligibility. Furthermore, we evaluate our method in multi-language settings and under visual-cue-impaired scenarios, and it shows robust performance gains.
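To make the training setup concrete, below is a minimal PyTorch sketch of how such a linguistic constraint might be wired in as an auxiliary training loss. The model interfaces, the pslm embedding function, the SI-SDR signal loss, and the weight lam are all assumptions for illustration, not details from the paper; the key property it demonstrates is that the frozen PSLM contributes only a training-time loss term, so inference cost is unchanged.

```python
# Hypothetical sketch of the training objective described in the abstract:
# the AV-TSE model is supervised by a standard signal-level loss plus a
# "linguistic constraint" that pulls the PSLM embeddings of the extracted
# speech toward those of the clean target. The PSLM is frozen and used only
# during training, so inference is unaffected. All names are illustrative.
import torch
import torch.nn.functional as F

def si_sdr_loss(est: torch.Tensor, ref: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Negative scale-invariant SDR, a common signal-level loss for TSE."""
    ref = ref - ref.mean(dim=-1, keepdim=True)
    est = est - est.mean(dim=-1, keepdim=True)
    proj = (torch.sum(est * ref, dim=-1, keepdim=True)
            / (torch.sum(ref ** 2, dim=-1, keepdim=True) + eps)) * ref
    noise = est - proj
    si_sdr = 10 * torch.log10((proj.pow(2).sum(-1) + eps)
                              / (noise.pow(2).sum(-1) + eps))
    return -si_sdr.mean()

def train_step(av_tse_model, pslm, mixture, visual_cues, clean_target,
               optimizer, lam: float = 0.1):
    """One training step with an added linguistic-constraint loss.

    `pslm` maps a waveform to a sequence of linguistic embeddings and is
    assumed frozen (all parameters have requires_grad=False); `lam` weights
    the auxiliary loss. Both are assumptions for this sketch.
    """
    est_speech = av_tse_model(mixture, visual_cues)

    # Standard signal-level supervision against the clean target speech.
    loss_signal = si_sdr_loss(est_speech, clean_target)

    # Linguistic constraint: match PSLM embeddings of the estimate to those
    # of the clean target. No gradient is needed for the target branch;
    # the estimate branch stays differentiable so gradients reach the
    # AV-TSE model through the frozen PSLM.
    with torch.no_grad():
        target_emb = pslm(clean_target)
    est_emb = pslm(est_speech)
    loss_ling = 1.0 - F.cosine_similarity(est_emb, target_emb, dim=-1).mean()

    loss = loss_signal + lam * loss_ling
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference, only `av_tse_model(mixture, visual_cues)` runs; the PSLM branch is dropped entirely, which is how the approach avoids any extra inference-time cost.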
Similar Papers
ELEGANCE: Efficient LLM Guidance for Audio-Visual Target Speech Extraction
Sound
Helps computers hear the right voice in noisy rooms.
Leveraging Language Information for Target Language Extraction
Audio and Speech Processing
Lets computers hear one language in noisy crowds.
Text-Speech Language Models with Improved Cross-Modal Transfer by Aligning Abstraction Levels
Computation and Language
Makes computers understand talking and writing together.